THE AGE OF INTELLIGENT MACHINES | Footnotes
September 24, 2001
by Ray Kurzweil
The Second Industrial Revolution
1. Duncan Bythell, The Handloom Weavers: A Study in the English Cotton Industry during the Industrial Revolution, p. 70.
2. Bythell, “The Coming of the Powerloom,” The Handloom Weavers, pp. 66-93.
3. Malcolm I. Thomis has written a sound documentation of this important historical movement in The Luddites: Machine-Breaking in Regency England.
4. See, for example, Sir Charles Percy Snow, speaker, “Scientists and Decision Making,” in Martin Greenberger, ed., Computers and the World of the Future, p. 5; and Langdon Winner, “Luddism as Epistemology,” in Autonomous Technology, pp. 325-335.
5. Ben J. Wattenberg, ed., The Statistical History of the United States from Colonial Times to the Present.
6. U.S. Department of Commerce, Bureau of the Census, Statistical Abstract of the United States, 1986, 106th ed., p. 390; see also U.S. Bureau of the Census, How We Live: Then and Now.
7. Ben J. Wattenberg, ed., The Statistical History of the United States from Colonial Times to the Present.
8. U.S. Department of Commerce, Bureau of the Census, Statistical Abstract of the United States, 1986.
9. Ben J. Wattenberg, ed., The Statistical History of the United States from Colonial Times to the Present, p. 224.
10. U.S. Department of Commerce, Bureau of the Census, Historical Statistics of the U.S.: Colonial Times to 1970, vol. 1; and National Center for Education Statistics, U.S. Department of Education, 1986.
11. U.S. Department of Commerce, Bureau of the Census, Historical Statistics of the U.S.: Colonial Times to 1970, vol. 1.
12. U.S. Department of Commerce, Bureau of the Census, Historical Statistics of the U.S.: Colonial Times to 1970, vol. 1.
13. Wassily W. Leontief, The Impact of Automation on Employment, 1963-2000.
14. Wassily W. Leontief, The Impact of Automation on Employment, 1963-2000.
15. This phenomenon is discussed at length by Barry Bluestone and Bennett Harrison in The Deindustrialization of America; see also Lester Thurow, “The Surge in Inequality,” Scientific American, May 1987, pp. 30-38; and Harrison and Bluestone, The Great U-Turn.
16. Tom Forester surveys the cost and power trends in the computer revolution in High-Tech Society, pp. 21-41.
17. Edward Feigenbaum and Pamela McCorduck discuss the impact of expert systems on the field of molecular biology in The Fifth Generation, p. 66. Sketches of computer-assisted diagnostic programs presently in use can be found in Katherine Davis Fishman, The Computer Establishment, pp. 361-366; see also Roger Schank, The Cognitive Computer: On Language, Learning, and Artificial Intelligence, pp. 231-234.
18. Ben J. Wattenberg, ed., The Statistical History of the United States from Colonial Times to the Present, series F 1-5.
19. David L. Parnas delivers one perspective on this topic in “Computers in Weapons: The Limits of Confidence,” in David Bellin and Gary Chapman, eds., Computers in Battle: Will They Work? pp. 209-231; also of interest is a statement on future prospects for AI by Robert Dale, in Allan M. Din, ed., Arms and Artificial Intelligence, p. 45.
20. This possibility may be more hypothetical than real because of the close relationship between manufacturing and services. Loss of manufacturing in key areas, for example, could be perilous to next-stage prospects for innovation. For an analysis of these and related problems, see S. S. Cohen and J. Zysman, Manufacturing Matters.
21. Translated from the Russian, “SAM” means surface-to-air missile, or literally, fixed maintenance depot to air.
22. See Tom Athanasiou, “Artificial Intelligence as Military Technology,” in Bellin and Chapman, eds., Computers in Battle.
23. SCI is aimed toward the use of advanced computing to develop weapons and systems “for battle management in complex environments where human decision-making [is] seen to be inadequate” (Allan M. Din, ed., Arms and Artificial Intelligence, p. 7; see also pp. 90-91 in the same volume).
What Is AI, Anyway?
1. Similar definitions are found in many standard textbooks on AI.
2. This conference was originally called the Dartmouth Summer Research Project on Artificial Intelligence; for a full account of this landmark event, see Pamela McCorduck, Machines Who Think, pp. 93 ff.
3. Norbert Wiener, the famous mathematician who coined this term (later supplanted by the term “artificial intelligence”), was clearly fond of the meaning of its Greek root, “kubernetes”: pilot or governor.
4. These terms were introduced by Edward Feigenbaum; see his “The Art of Artificial Intelligence: Themes and Case Studies of Knowledge Engineering,” in AFIPS Conference Proceedings of the 1978 National Computer Conference 47: 227-240.
5. Roger Schank, The Cognitive Computer, pp. 49-51.
6. The layman may also want to see Susan J. Shepard, “Conversing with Tiny ELIZA,” Computer Language 4 (May 1987). See also notes 61 and 62 to chapter 2.
7. See Hans Berliner, “New Hitech Computer Chess Success,” AI Magazine 9 (Summer 1988): 133. And for a brilliant discussion of machine versus human intelligence in chess and of the dangers of rigidity in “learning machines,” see Norbert Wiener, discussant, “Scientists and Decision Making,” in Martin Greenberger, ed., Computers and the World of the Future, pp. 23-28.
8. See Lotfi Zadeh, “Fuzzy Sets,” Information and Control 8 (1965): 338-353. See also a fascinating interview with Zadeh published in Communications of the ACM, April 1984, pp. 304-311, in which he discusses the inadequacy of precise AI techniques and tools for solving real-life (“fuzzy”) problems.
9. See Sigmund Freud, The Psychopathology of Everyday Life, in The Basic Writings of Sigmund Freud; see also his Collected Papers; for another point of view, see Carl Jung et al., Man and His Symbols; and for a shorter but broad overview on the subject, see William Kessen and Emily D. Cahan, “A Century of Psychology: From Subject to Object to Agent,” American Scientist, Nov.-Dec. 1986, pp. 640-650.
10. Newell’s fullest and most current vision can be found in John E. Laird, A. Newell, and Paul S. Rosenbloom, “SOAR: An Architecture for Intelligence” (University of Michigan Cognitive Science and Machine Intelligence Laboratory Technical Report no. 2, 1987).
11. See Richard Dawkins’s defense of Darwinism in The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design; for some classic arguments on design versus necessity, see also Asa Gray, Darwiniana, A. Hunter Dupree, ed., pp. 51-71.
12. This subject is eloquently addressed in a slim volume (23 pages) by S. Alexander, Art and Instinct.
13. To some, of course, the concept of God is not applicable to Buddhism; see William James, The Varieties of Religious Experience, pp. 42-44 and 315.
14. Charles Darwin, The Origin of Species. In this, his classic work on natural selection and evolution, Darwin states, “If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down” (p. 229).
15. Richard Dawkins, The Blind Watchmaker, pp. 112-113.
16. Note, for example, a compelling argument against this notion (which instead champions the notion of hierarchy in evolution) in Stephen Jay Gould, “Is a New and General Theory of Evolution Emerging?” Paleobiology 6 (1980): 119-130.
17. See Gould, Paleobiology 6 (1980): 119-130. Also, in Stephen Jay Gould, The Mismeasure of Man, pp. 326-334, mention is made of “human nature” in relation to the concept of natural selection. See also Richard Dawkins, The Blind Watchmaker, pp. 141-142.
18. This idea is supported, at least in theory, by some pioneers of AI; see, for example, Lawrence Fogel, Alvin Owens, and Michael Walsh, Artificial Intelligence through Simulated Evolution, pp. viii and 112.
19. Edward Fredkin of MIT is credited with saying, “Artificial intelligence is the next step in evolution” in Sherry Turkle, The Second Self, p. 242.
Philosophical Roots
1. The literature on mind as machine is extensive. One provocative work is Daniel C. Dennett’s Brainstorms: Philosophical Essays on Mind and Psychology. Dennett, a philosopher, draws upon the achievements of AI to formulate a new theory of mind. An important survey of philosophical issues can be found in Margaret Boden’s Artificial Intelligence and Natural Man, chapter 14; see also Pamela McCorduck, Machines Who Think. A brief, useful summary of trends is the introduction (“Philosophy and AI: Some Issues”) to Steve Torrance, ed., The Mind and the Machine: Philosophical Aspects of Artificial Intelligence. For the philosophy-AI nexus, see the papers in Martin D. Ringle, ed., Philosophical Perspectives in Artificial Intelligence. Another important source is John Haugeland, ed., Mind Design. The legacy of the mind-body problem is related to contemporary AI debates in succinct, lively fashion by Paul M. Churchland in Matter and Consciousness.
2. Some theorists who have argued for the mind-beyond-machine approach are J. R. Lucas, Hubert Dreyfus, and John Searle. Lucas, in 1961, used Gödel’s incompleteness theorem to argue that computers could never model the human mind successfully; see his “Minds, Machines and Gödel,” Philosophy 36 (1961): 120-124. For a refutation of this position, see Dennett’s Brainstorms, chapter 13. Dreyfus’s famous critique of AI is What Computers Can’t Do: The Limits of Artificial Intelligence. Searle distinguishes between the capacities of “weak AI” and “strong AI” in his 1980 paper “Minds, Brains, and Programs,” The Behavioral and Brain Sciences 3 (1980): 417-424. (Searle’s paper is reprinted as chapter 10 of Haugeland’s Mind Design.) Here Searle introduces his famous “Chinese room” example to criticize what he sees as the “residual behaviorism” of AI. A particularly useful review of criticism of AI is J. Schwartz, “Limits of Artificial Intelligence,” in Stuart C. Shapiro, ed., Encyclopedia of Artificial Intelligence, vol. 1. The Encyclopedia is an excellent general source.
3. See Boden’s discussion, in Artificial Intelligence, pp. 21-63, of Colby’s attempt to develop a computational model of human emotions; on pages 440-444 she argues that emotions are not “merely” feelings; in What Computers Can’t Do, Dreyfus argues from a phenomenological standpoint that computers can never simulate our understanding, in part because of our capacity to experience emotions. Dennett provides an intriguing discussion of the matter in chapter 11 (“Why You Can’t Make a Computer That Feels Pain”) of Brainstorms.
4. According to Dreyfus in What Computers Can’t Do, “The story of artificial intelligence might well begin around 450 B.C.,” when Plato expressed the idea that “all knowledge must be stateable in explicit definitions which anyone could apply” (p. 67).
5. For an overview see D. A. Rees, “Platonism and the Platonic Tradition,” The Encyclopedia of Philosophy, vol. 6, pp. 333-341 (New York: The Macmillan Company, 1967).
6. See Thomas L. Hankins, Science and the Enlightenment. See also Ernst Cassirer, The Philosophy of the Enlightenment; the first three chapters provide an important overview of the new studies of mind and how they reflected methods of the new science.
7. See Reinhardt Grossmann, Phenomenology and Existentialism: An Introduction; the critiques of AI mounted by Hubert and Stuart Dreyfus have their roots in phenomenology. Hubert Dreyfus has developed the Heideggerian notion that understanding is embedded in a world of social purpose, which cannot be adequately represented as a set of facts. Stuart Dreyfus emphasizes the importance of skills that elude representations and rules by drawing upon the existential phenomenology of Merleau-Ponty. See H. Hall, “Phenomenology,” in Shapiro, Encyclopedia of Artificial Intelligence, vol. 2, pp. 730-736.
8. See A. J. Ayer, “Editor’s Introduction,” in A. J. Ayer, ed., Logical Positivism, pp. 3-28; Rudolf Carnap, “The Elimination of Metaphysics through Logical Analysis of Language,” in Ayer’s Logical Positivism, pp. 60-81; Noam Chomsky, Syntactic Structures (1957). For a review of Chomsky’s achievement and influence, see Frederick J. Newmeyer’s Linguistic Theory in America, 2nd ed., chapter 2, “The Chomskyan Revolution.”
9. This debate, in its technical and personal dimensions, is described in some detail in McCorduck’s Machines Who Think.
10. Plato’s works are readily available in Greek and English in the Loeb Classical Library editions; some other English translations of individual works are mentioned below. An excellent place to begin is any of several reference works: Gilbert Ryle, “Plato,” in The Encyclopedia of Philosophy, vol. 6, pp. 324-333; D. J. Allan, “Plato,” in The Dictionary of Scientific Biography, vol. 11, pp. 22-31 (New York: Charles Scribner’s Sons, 1975). A more detailed account can be found in J. N. Findlay, Plato and Platonism: An Introduction.
11. In Aristotle: The Growth and Structure of His Thought, chapters 2 and 3, G. E. R. Lloyd describes Aristotle as both a pupil and a critic of Plato.
12. See “The Greek Academy,” in The Encyclopedia of Philosophy, vol. 3, pp. 382-385. The Academy is also treated in Ryle’s “Plato,” pp. 317-319. In his excellent survey, A History of Greek Philosophy, vol. 4, p. 19, W. K. C. Guthrie explains that the Academy was by no means like our modern university: it had religious elements we might more readily associate with a medieval college. Volume 4 of this survey is devoted to Plato; the Academy is discussed on pp. 8-38. The early years of Plato’s Academy are described in the reprint edition of Eduard Zeller’s 1888 classic, Plato and the Older Academy. See note 17 below.
13. Guthrie (vol. 4, pp. 338-340) points out that Plato was influenced by the mystery religions of his day, especially in the Phaedo.
14. Plato describes the movements of the planets in important passages in the Republic and the Timaeus; in Plato’s Timaeus, pp. 33-35, Francis Cornford provides a useful summary of the kinds of motion Plato describes in the Timaeus. G. E. R. Lloyd has a lucid and concise discussion of Plato’s astronomy in chapter 7 of Early Greek Science: Thales to Aristotle, pp. 80-98. The nature of Plato’s astronomy, long a controversial subject for the history of science, is analyzed in John P. Anton’s Science and the Sciences in Plato.
15. The myth of Er in the Republic (617-618) was Plato’s version of a scheme originally developed by the Pythagorean philosopher Philolaus, who put fire at the extremity and at the center of the universe, thus displacing the earth from its central position (G. S. Kirk, J. E. Raven, and M. Schofield, The Presocratic Philosophers, p. 259). The Pythagorean concept of a central fire is described by S. Sambursky in The Physical World of the Greeks, pp. 64-66.
16. The discovery of irrational numbers eventually resulted in the rejection of a Pythagorean “geometric atomism” and led to the concept of the continuum (S. Sambursky, The Physical World of the Greeks, pp. 33-35). In Plato’s Theaetetus, the mathematician Theodorus demonstrates the irrationality of nonsquare numbers up to the root of 17. Plato then claims that the roots of all numbers that are not squares are irrational. According to G. E. R. Lloyd (Early Greek Science: Thales to Aristotle, pp. 32-34), the irrationality of the square root of 2 was known even before the time of Plato. The Greeks commonly expressed the proof in geometrical terms, by showing that the diagonal of a square is not commensurable with its side. (The proof assumes this commensurability, then shows that it leads to an impossibility because the resulting number is both odd and even.) The discovery that some magnitudes are incommensurable (c. 450-441 B.C.) is attributed to Hippasus of Metapontum, a member of the Pythagorean Brotherhood, in Alexander Hellemans and Bryan Bunch, The Timetables of Science, p. 31.
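The odd-and-even argument summarized parenthetically above can be written out; what follows is a sketch of the standard reconstruction in modern notation (the Greeks stated it geometrically, in terms of the diagonal and side of a square):

```latex
% Proof by contradiction that the diagonal and side are incommensurable,
% i.e., that sqrt(2) is irrational.
Suppose $\sqrt{2} = p/q$, where $p$ and $q$ are integers with no common
factor. Then $p^2 = 2q^2$, so $p^2$ is even, and hence $p$ is even:
write $p = 2r$. Substituting gives $4r^2 = 2q^2$, so $q^2 = 2r^2$ and
$q$ is even as well. But then $p$ and $q$ share the factor $2$,
contradicting the assumption that the ratio was in lowest terms; in the
Greek formulation, the same number would have to be both odd and even.
Hence no such ratio exists.
```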
17. E. R. Dodds’s The Greeks and the Irrational is a classic treatment of this subject. Ananke is described in detail in F. M. Cornford, Plato’s Cosmology, pp. 159-177. A more recent work is Richard D. Mohr’s The Platonic Cosmology.
18. In the Phaedo and The Republic, Plato opposes the activity of intellect to the “brutish” passivity of desire (Martha Nussbaum, “Rational Animals and the Explanation of Action,” in The Fragility of Goodness: Luck and Ethics in Greek Tragedy and Philosophy, p. 273). In this book Nussbaum explores the antithesis in Greek philosophy between the controlling power of reason and events beyond one’s control, an antithesis central to Plato’s dialogues.
19. The first mention of the Forms is in the Phaedo; an excellent discussion can be found in Gilbert Ryle’s article (pp. 320-324) in The Encyclopedia of Philosophy.
20. Plato’s theory of matter in the Timaeus, where the smallest particles are triangles, is a blend of Pythagorean ideas and Democritan atomism (see S. Sambursky, The Physical World of the Greeks, p. 31).
21. Cornford, in Plato’s Cosmology, pp. 159-177, provides a lucid discussion of this tension between necessity and reason.
22. On the dialog as Plato’s chosen form, see D. Hyland’s “Why Plato Wrote Dialogues,” Philosophy and Rhetoric 1 (1968): 38-50.
23. Physicist Werner Heisenberg describes how he arrived at his uncertainty principle, which he formulated in 1927, in chapter 6 of his gracefully written and entertaining volume Physics and Beyond: Encounters and Conversations. Heisenberg was influenced by Plato’s corpuscular physics, and he explores the relation between Plato’s ideas and quantum theory in chapter 20, “Elementary Particles and Platonic Philosophy (1961-1965).”
24. A refreshing new interpretation of the Phaedrus emphasizing the role of paradox is Martha Nussbaum’s “‘This Story Isn’t True’: Madness, Reason, and Recantation in the Phaedrus,” chapter 7 in The Fragility of Goodness, pp. 200-228.
25. D. A. Rees, “Platonism and the Platonic Tradition,” p. 336. It was Xenocrates, who headed the Academy after the death of Speusippus, Plato’s immediate successor, who identified the Platonic Ideas with mathematical numbers, not the “ideal” numbers postulated in the Academy under Plato and discussed in the Phaedo. The fates of the various forms of Platonism are reviewed in several brief articles in the Dictionary of the History of Ideas (New York: Charles Scribner’s Sons, 1973), vol. 3: John Fisher’s “Platonism in Philosophy and Poetry,” pp. 502-508; John Charles Nelson’s “Platonism in the Renaissance,” pp. 508-515; and Ernst Moritz Manasse’s “Platonism since the Enlightenment,” pp. 515-525.
26. D. H. Fowler, in The Mathematics of Plato’s Academy, reconstructs in detail the curriculum of the Academy. A particularly readable account of the work of the geometers can be found in chapter 3 of François Lasserre, The Birth of Mathematics in the Age of Plato. A more technical treatment can be found in chapter 3 of Wilbur Richard Knorr, The Ancient Tradition of Geometric Problems.
27. For a general overview of Plato’s philosophy of numbers, see “Plato,” The New Encyclopedia Britannica, vol. 14, p. 538. For the text of the Epinomis in Greek and English, see W. R. M. Lamb, ed., Plato, Loeb Classical Library, vol. 8. In the Epinomis, 976 D-E, the speaker asks what science is indispensable to wisdom: “it is the science which gave number to the whole race of mortals.” See also R. S. Brumbaugh, Plato’s Mathematical Imagination.
28. A superb introduction to Enlightenment thought is Peter Gay’s two volumes, The Enlightenment: An Interpretation, vol. 1, The Rise of Modern Paganism and vol. 2, The Science of Freedom.
29. The definitive biography is Richard Westfall’s Never at Rest: A Biography of Isaac Newton. No one interested in Isaac Newton’s scientific achievement should fail to see I. Bernard Cohen’s Newtonian Revolution. Those who wish to tackle Newton in the original should see Isaac Newton’s Philosophiae Naturalis Principia Mathematica, 3rd edition (1726), assembled by Alexander Koyré, I. Bernard Cohen, and Anne Whitman.
30. Otto Mayr, Authority, Liberty, and Automatic Machinery in Early Modern Europe.
31. A useful overview of Descartes’s life and work can be found in The Dictionary of Scientific Biography, vol. 4, pp. 55-65. Descartes, by Jonathan Rée, is unsurpassed in giving a unified view of Descartes’s philosophy and its relation to other systems of thought.
32. The brief Discours de la Méthode appeared in 1637 and is written in a lively autobiographical manner. It is readily available in the Library of the Liberal Arts edition, which includes the appendixes in which Descartes introduced analytic geometry and his theory of refraction: Discourse on Method, Optics, Geometry, and Meteorology, trans. by Paul J. Olscamp.
33. Derek J. de Solla Price, “Automata and the Origins of Mechanism and Mechanistic Philosophy,” Technology and Culture 5 (1964): 23.
34. See I. Bernard Cohen on Newton in the Dictionary of Scientific Biography, vol. 10, pp. 42-103, and Cohen’s Newtonian Revolution, mentioned above.
35. Charles Gillispie, The Edge of Objectivity, p. 140. The resulting prestige of science during the Enlightenment is treated in chapter 5.
36. For a readable and lucid introduction to relativity, see the 1925 classic by Bertrand Russell, The ABC of Relativity, 4th rev. ed. A more detailed treatment may be found in Albert Einstein, Relativity: The Special and General Theory, a Popular Exposition, trans. Robert Lawson.
37. Gillispie, The Edge of Objectivity, pp. 145-150.
38. Leibniz’s criticism of the watchmaker God can be found in a letter written in November 1715 to Samuel Clarke (1675-1729), a renowned disciple of Newton (see pp. 205-206 of Leibniz’s Philosophical Writings, G. H. R. Parkinson, ed.). For the famous debate this letter initiated, see The Leibniz-Clarke Correspondence, H. G. Alexander, ed.
39. W. T. Jones, Kant and the Nineteenth Century, p. 14. The legacy of Descartes is expressed in Kant’s own definition of the Enlightenment, which is quoted by Ernst Cassirer in The Philosophy of the Enlightenment, p. 163: “Enlightenment is man’s exodus from his self-incurred tutelage. Tutelage is the inability to use one’s understanding without the guidance of another person. This tutelage is self-incurred if its cause lies not in any weakness of the understanding, but in indecision and lack of courage to use the mind without the guidance of another. ‘Dare to know’ (sapere aude)! Have the courage to use your own understanding; this is the motto of the Enlightenment.”
40. Immanuel Kant, Critique of Pure Reason, 1st ed. 1781; Prolegomena to Any Future Metaphysics, 1st ed. 1783. The relations between Kantian philosophy and science are explored in Gordon G. Brittan, Jr., Kant’s Theory of Science.
41. A brief history of logical positivism can be found in A. J. Ayer, Logical Positivism, pp. 3-28. Moritz Schlick, center of the Vienna Circle in the 1920s, compares the Kantian and positivist treatments of reality in “Positivism and Realism,” an essay published in 1932 or 1933 and reprinted in Ayer’s Logical Positivism (see p. 97).
42. Ayer, in Logical Positivism, p. 11, points out the positivist nature of Hume’s attack on metaphysics and then claims that he could well have cited Kant instead, “who maintained that human understanding lost itself in contradictions when it ventured beyond the bounds of possible experience.” Ayer claims that “the originality of the logical positivists lay in their making the impossibility of metaphysics depend not upon the nature of what could be known but upon the nature of what could be said” (Logical Positivism, p. 11).
43. Norman Malcolm, Ludwig Wittgenstein: A Memoir, with a Biographical Sketch by Georg Henrik Von Wright, p. 10. Whereas Kant distinguished between what can and cannot be known, Wittgenstein distinguished between what can and cannot be said. See “The Tractatus,” chapter 6 of W. T. Jones, The Twentieth Century to Wittgenstein and Sartre.
44. Ludwig Wittgenstein, Tractatus Logico-Philosophicus, trans. by D. F. Pears and B. F. McGuinness, first German edition published in 1921.
45. Malcolm, Ludwig Wittgenstein, pp. 11-12.
46. Wittgenstein, Tractatus, p. 37.
47. Wittgenstein, Tractatus, p. 115.
48. Wittgenstein, Tractatus, p. 115.
49. For a readable discussion of the Church-Turing thesis, see David Harel’s Algorithmics: The Spirit of Computing, pp. 221-223. The Church-Turing thesis, named after Alonzo Church and Alan Turing, is based on ideas developed in the following papers: Alan Turing, “On Computable Numbers with an Application to the Entscheidungsproblem,” Proc. London Math. Soc. 42 (1936): 230-265; Alonzo Church, “An Unsolvable Problem of Elementary Number Theory,” Amer. J. Math. 58 (1936): 345-363.
50. See, for example, statement 4.002 in Wittgenstein’s Tractatus, p. 37.
51. Wittgenstein, Tractatus, pp. 7, 151.
52. Ludwig Wittgenstein, Philosophical Investigations, trans. G. E. M. Anscombe.
53. Michael Dummett, in his “Frege and Wittgenstein” (in Irving Block, ed., Perspectives on the Philosophy of Wittgenstein, pp. 31-42), argues that Wittgenstein tried and failed to provide a theory of language in Philosophical Investigations.
54. In the preface to Philosophical Investigations (p. vi), Wittgenstein claims that he recognized “grave mistakes” in his earlier work, the Tractatus. The more atomistic approach of the Tractatus is challenged by a greater emphasis on contexts in Philosophical Investigations. Anthony Kenny compares the two works in “Wittgenstein’s Early Philosophy of Mind,” in Block, ed., Perspectives, pp. 140-147. A. J. Ayer remarks that Wittgenstein “modified the rigors of his early positivism” as expressed in the Tractatus (see Ayer’s Logical Positivism, p. 5).
55. In the Preface to his 1936 work Language, Truth and Logic, p. 31, Alfred Ayer asserts that his views stem from the writings of Russell and Wittgenstein.
56. See Reinhardt Grossmann, Phenomenology and Existentialism.
57. Tractatus, p. 151.
58. Hubert L. Dreyfus, “Alchemy and Artificial Intelligence,” The RAND Corporation, December 1965, publication 3244. For a profile of Dreyfus, see Frank Rose, “The Black Knight of AI,” Science 85, 6 (March 1985): 46-51.
59. Pamela McCorduck, Machines Who Think, p. 204. McCorduck devotes chapter 9 (“L’Affair Dreyfus”) to an engaging history of Dreyfus’s critique and the reactions it provoked in the AI community.
60. ELIZA was first announced in Joseph Weizenbaum’s “ELIZA: A Computer Program for the Study of Natural Language Communication between Man and Machine,” Communications of the Association for Computing Machinery 9 (1966): 36-45. Hubert Dreyfus stumped ELIZA by entering the phrase “I’m feeling happy” and then correcting it by adding “No, elated.” ELIZA responded with “Don’t be so negative,” because it is programmed to respond that way whenever “no” appears anywhere in the input. See Hubert Dreyfus and Stuart Dreyfus, “Why Computers May Never Think like People,” Technology Review 89 (1986): 42-61.
61. ELIZA mimics a Rogerian psychotherapist, whose technique consists largely of echoing utterances of the patient; it therefore uses very little memory, and arrives at its “answers” by combining transformations of the “input” sentences with phrases stored under keywords. Its profound limitations were acknowledged by its creator. In his 1976 work, Computer Power and Human Reason, Weizenbaum argues that ELIZA’s limitations serve to illustrate the importance of context for natural language understanding, a point made in his original paper. He chose this kind of psychotherapeutic dialog precisely because the psychotherapist in such a dialog need know practically nothing about the real world. See Margaret Boden, Artificial Intelligence and Natural Man, p. 108.
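The mechanism note 61 describes, keyword rules plus transformations of the input sentence, with no knowledge of the real world, can be sketched in a few lines of Python. This is a toy illustration, not Weizenbaum's actual script; the rules and responses are invented for the example, though the "no" rule mirrors the behavior described in note 60:

```python
import re

# A minimal ELIZA-style sketch (invented rules, not Weizenbaum's script):
# each rule pairs a keyword pattern with a response template, and any
# captured text is echoed back with first/second person reflected.
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "you": "I", "your": "my"}

RULES = [
    # Fires whenever "no" appears anywhere in the input, reproducing
    # the behavior Dreyfus exposed with "No, elated."
    (re.compile(r".*\bno\b.*", re.IGNORECASE), "Don't be so negative."),
    (re.compile(r"i'?m feeling (.*?)\.?$", re.IGNORECASE),
     "Why do you feel {0}?"),
    (re.compile(r"i (.*?)\.?$", re.IGNORECASE), "You say you {0}?"),
]

def reflect(fragment):
    # Swap pronouns so the echo reads naturally ("my" -> "your", etc.).
    return " ".join(REFLECTIONS.get(word, word)
                    for word in fragment.lower().split())

def respond(sentence):
    # Apply the first rule whose pattern matches; fill the template
    # with the reflected captured text.
    for pattern, template in RULES:
        match = pattern.match(sentence)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Tell me more."  # default when no keyword rule matches
```

The point of the sketch is how little is needed: no memory of the conversation and no model of meaning, only surface pattern matching, which is exactly the limitation Weizenbaum himself stressed.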
62. Dreyfus developed this argument in detail in What Computers Can’t Do: The Limits of Artificial Intelligence. There he sets out objections to “the assumption that man functions like a general-purpose symbol-manipulating device” (p. 156). Especially drawing his ire was the work of Allen Newell and H. A. Simon, Computer Simulation of Human Thinking, The RAND Corporation, P-2276 (April 1961).
63. PROLOG, a language based upon logic programming, was devised by Alain Colmerauer at Marseille around 1970 (see W. F. Clocksin and C. S. Mellish, Programming in PROLOG).
64. Fuzzy logic, developed by L. A. Zadeh, guards against the oversimplification of reality by not assuming all fundamental questions have yes or no answers. See E. H. Mamdani and B. R. Gaines, Fuzzy Reasoning and Its Applications.
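The contrast with yes-or-no logic can be made concrete. In the sketch below (the thresholds are invented for illustration, not taken from Zadeh's papers), membership in a vague category like "tall" is a degree between 0 and 1, and the standard fuzzy connectives combine such degrees by taking the minimum, maximum, and complement:

```python
# A toy illustration of fuzzy membership: "tall" is a matter of degree,
# not a yes/no predicate. The 160/190 cm thresholds are assumptions
# made up for this example.
def tall(height_cm):
    # Piecewise-linear membership: 0 at or below 160 cm,
    # 1 at or above 190 cm, linear in between.
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0

# Zadeh's standard connectives: AND is min, OR is max, NOT is 1 - x.
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a
```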
65. See, in particular, the introduction to the revised edition, in Hubert Dreyfus, What Computers Can’t Do.
66. Dreyfus’s predictions about the limitations of chess-playing programs have been proven wrong time and again. Chess-playing programs have improved their performance through the application of greater and greater computational power. One of the latest benchmarks occurred when HiTech won the Pennsylvania State Chess Championship in 1988. See Hans Berliner, “HITECH Becomes First Computer Senior Master,” AI Magazine 9 (Fall 1988): 85-87.
67. McCorduck, Machines Who Think, p. 205.
68. In What Computers Can’t Do Dreyfus argues, “There is no justification for the assumption that we first experience isolated facts, or snapshots of facts, or momentary views of snapshots of isolated facts, and then give them significance. The analytical superfluousness of such a process is what contemporary philosophers such as Heidegger and Wittgenstein are trying to point out” (p. 270).
69. Hubert L. Dreyfus and Stuart E. Dreyfus, “Making a Mind versus Modeling the Brain: Artificial Intelligence Back at a Branchpoint,” Daedalus 117 (Winter 1988): 15-43. This issue of Daedalus, devoted to AI, was subsequently published in book form; see Stephen R. Graubard, ed., The Artificial Intelligence Debate: False Starts, Real Foundations.
70. Dreyfus and Dreyfus, “Making a Mind,” p. 15.
71. Jack Cowan and David H. Sharp review the importance of neural nets for AI in “Neural Nets and Artificial Intelligence,” Daedalus 117 (Winter 1988): 85-121.
72. Dreyfus and Dreyfus, “Making a Mind,” pp. 38-39.
73. See “The Role of the Body in Intelligent Behavior,” chapter 7 of Hubert Dreyfus’s What Computers Can’t Do.
74. Sherry Turkle also explores children’s responses to computers in her 1984 work, The Second Self: Computers and the Human Spirit, chapter 1, “Child Philosophers: Are Smart Machines Alive?”
75. Sigmund Freud, Jokes and Their Relation to the Unconscious, 1st ed., 1905. Marvin Minsky provides a new interpretation of jokes, emphasizing the importance of “knowledge about knowledge,” in his “Jokes and the Logic of the Cognitive Unconscious,” in Lucia Vaina and Jaakko Hintikka, eds., Cognitive Constraints on Communication, pp. 175-200.
Mechanical Roots
1. For the relationship between logic and recursion, see Stephen Cole Kleene, “λ-Definability and Recursiveness,” Duke Mathematical Journal 2 (1936): 340-353. See also Stephen Cole Kleene, Introduction to Metamathematics. For Rosser’s contribution, see J. Barkley Rosser, “Extensions of Some Theorems of Gödel and Church,” Journal of Symbolic Logic 1 (1936): 87-91. Church has made many important contributions to logic and computation. A coherent presentation of his work appears in Alonzo Church, Introduction to Mathematical Logic, vol. 1.
2. For the flavor of this theory, see a classic text on numerical analysis and computation: R. W. Hamming, Introduction to Applied Numerical Analysis.
3. A good example of such thinking is Bertrand Russell, Introduction to Mathematical Philosophy.
4. The paradox was first introduced in Bertrand Russell, Principles of Mathematics, 2nd ed., pp. 79-81. Russell’s paradox is a subtle variant of the Liar Paradox. See E. W. Beth, Foundations of Mathematics, p. 485.
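For readers who have not met it, the paradox is brief enough to state in its standard modern set-theoretic form:

```latex
% Russell's paradox.
Let $R = \{\, x \mid x \notin x \,\}$, the set of all sets that are not
members of themselves. Asking whether $R$ is a member of itself yields
\[
  R \in R \iff R \notin R,
\]
a contradiction either way; it was this contradiction in naive set
theory that forced the restrictions Russell later built into the
theory of types.
```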
5. Gottlob Frege was about to publish a monumental work on arithmetic and set theory when Russell pointed out the implications of his paradox. Frege could only add a postscript that said, “A scientist can hardly meet with anything more undesirable than to have the foundations give way just as the work is finished. In this position I was put by a letter from Mr. Bertrand Russell.” See Bertrand Russell, Letter to Frege, 1902, published in Jean van Heijenoort, ed., From Frege to Gödel.
6. Bertrand Russell, Principles of Mathematics, 2nd ed., 1938, pp. 10-32, 66-81.
7. Bertrand Russell, Principles of Mathematics, 2nd ed., pp. 10-32, 66-81.
8. See Bertrand Russell, Principles of Mathematics, 2nd ed., pp. v-xiv.
9. See also Alfred N. Whitehead and Bertrand Russell, Principia Mathematica, 3 vols., 2nd ed., pp. 187-231.
10. First introduced in Alan M. Turing, “On Computable Numbers with an Application to the Entscheidungsproblem,” Proc. London Math. Soc. 42 (1936): 230-265.
11. Work on PROLOG began in 1970. A clear presentation of the conceptual foundations of PROLOG appears in Robert Kowalski, “Predicate Logic as a Programming Language,” University of Edinburgh, DAI Memo 70, 1973. See also Alain Colmerauer, “Sur les bases théoriques de Prolog,” Groupe de IA, UER Luminy, Univ. d’Aix-Marseilles, 1979. This and other aspects of the Japanese program are discussed in Edward Feigenbaum and Pamela McCorduck, The Fifth Generation, p. 115.
12. These early experiments are described in A. Newell, J. C. Shaw, and H. Simon, “Empirical Explorations with the Logic Theory Machine,” Proceedings of the Western Joint Computer Conference 15 (1957): 218-239.
13. Turing’s theoretical model was first introduced in Alan M. Turing, “On Computable Numbers with an Application to the Entscheidungsproblem,” Proc. London Math. Soc. 42 (1936): 230-265.
14. An enormously influential paper is Alan M. Turing, “Computing Machinery and Intelligence,” Mind 59 (1950): 433-460, reprinted in E. Feigenbaum and J. Feldman, eds., Computers and Thought.
15. The program is called the “Turochamp” (evidently a contraction of “Turing and Champernowne”). See Andrew Hodges, Alan Turing: The Enigma, pp. 338-339.
16. Turing researched morphogenesis deeply enough to produce a paper on the subject: Alan M. Turing, “The Chemical Basis of Morphogenesis,” Phil. Trans. Roy. Soc. London B 237 (1952): 37-72.
17. See Andrew Hodges, Alan Turing: The Enigma, pp. 267-268.
18. For an engineering account of this project, see B. Randell, “The Colossus” (1976), reprinted in N. Metropolis, J. Howlett, and G. C. Rota, eds., A History of Computing in the Twentieth Century.
19. See David Hilbert, Grundlagen der Geometrie, Leipzig and Berlin, 1899, 7th ed., 1930.
20. Alan M. Turing, “On Computable Numbers with an Application to the Entscheidungsproblem,” Proc. London Math. Soc. 42 (1936): 230-265.
21. Simpler models of computation that have appeared since have perhaps been unjustly ignored. See Marvin Minsky, Computation: Finite and Infinite Machines.
22. This thesis was independently arrived at by both Church and Turing around 1936.
23. For an excellent article on the theory of computation, see John E. Hopcroft, “Turing Machines,” Scientific American, May 1984, pp. 86-98.
24. The busy beaver problem is one example of a large class of noncomputable functions, as one can see from Tibor Rado, “On Non-Computable Functions,” Bell System Technical Journal 41, no. 3 (1962): 877-884.
25. Church’s version of the result appears in Alonzo Church, “An Unsolvable Problem of Elementary Number Theory,” American Journal of Mathematics 58 (1936): 345-363.
26. We can see Gödel’s concerns about Russell’s framework in Kurt Gödel, “Russell’s Mathematical Logic” (1944), in P. A. Schilpp, ed., The Philosophy of Bertrand Russell.
27. Gödel’s incompleteness theorem first appeared in Kurt Gödel, “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I,” Monatsh. Math. Phys. 38 (1931): 173-198.
28. See Alonzo Church, “A Note on the Entscheidungsproblem,” Journal of Symbolic Logic 1 (1936): 40-41, and Kurt Gödel, “On Undecidable Propositions of Formal Mathematical Systems,” mimeographed report of lectures at the Institute for Advanced Study, Princeton, 1934.
29. Herbert A. Simon, The Shape of Automation for Men and Management (Harper & Row, 1965), p. 96.
30. A short reflection by Turing on some of the issues behind thinking machines appears as part of chapter 25 of B. V. Bowden, ed., Faster than Thought.
31. For an introductory account of some of the implications of the Church-Turing thesis, see Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid, pp. 559-586.
The Formula for Intelligence
1. See Morris Kline, Mathematics and the Search for Knowledge.
2. See Albert Einstein, Relativity: The Special and the General Theory. A more readable account is presented in Bertrand Russell, ABC of Relativity. See also A. Einstein, “Zur Elektrodynamik bewegter Körper,” Annalen der Physik 17 (1905): 891-921.
3. For the mathematically mature, an excellent introduction can be found in Enrico Fermi, Thermodynamics (Englewood Cliffs, N.J.: Prentice-Hall, 1937).
4. Atkins gives an account of thermodynamics and entropy that is fascinating and informal yet scholarly in P. W. Atkins, The Second Law.
5. A glimpse into the complexity is presented in Allan C. Wilson, “The Molecular Basis of Evolution,” Scientific American, October 1985, pp. 164-173.
6. See Rudy Rucker, Mind Tools: The Five Levels of Mathematical Reality, pp. 14-35.
7. The motivations and quests for anthropomorphic parallels are considered in John D. Barrow and Frank J. Tipler, The Anthropic Cosmological Principle, pp. 1-23.
8. See Robert P. Crease and Charles C. Mann, The Second Creation, pp. 393-420.
9. For one contribution to a “theory of everything,” see Stephen Hawking, A Brief History of Time. A more popular discussion is given in Heinz R. Pagels, Perfect Symmetry, pp. 269-367.
10. In 1666 Gottfried Leibniz contemplated a scientific system of reasoning, the “calculus ratiocinator,” that could be used to settle arguments formally. George Boole took up this problem and presented his work in 1854 in An Investigation of the Laws of Thought on Which Are Founded the Mathematical Theories of Logic and Probabilities, aspects of which are discussed in the next few pages.
11. See Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid, pp. 559-586.
12. James D. Meindl, “Chips for Advanced Computing,” Scientific American, October 1987, p. 78.
13. For a detailed treatment, see Carver Mead and Lynn Conway, Introduction to VLSI Systems. Old but nonetheless broadly relevant is the article Ivan E. Sutherland and Carver A. Mead, “Microelectronics and Computer Science,” Scientific American, September 1977, pp. 210-228.
14. For a discussion that is less philosophical than that of Hofstadter, see Rudy Rucker, Mind Tools: The Five Levels of Mathematical Reality, pp. 207-249.
15. See John E. Hopcroft, “Turing Machines,” Scientific American, May 1984, p. 91.
16. See A. Newell and H. A. Simon, “GPS: A Program that Simulates Human Thought,” in E. A. Feigenbaum and J. Feldman, eds., Computers and Thought, pp. 71-105, and Claude Shannon, “A Chess Playing Machine,” Scientific American, October 1950.
17. For a sketch, see Patrick H. Winston, “The LISP Revolution,” BYTE, April 1985, p. 209.
18. For this reason we’ve been more successful in building checkers programs. See Arthur L. Samuel, “Some Studies in Machine Learning Using the Game of Checkers,” (1959), reprinted in E. A. Feigenbaum and J. Feldman, eds., Computers and Thought, pp. 279-293. An early note is given in Claude Shannon, “Programming a Computer for Playing Chess,” Philosophical Magazine, series 7, 41 (1950): 256-275.
19. This and some of the other formulations discussed here have been examined in depth by researchers in game theory. A seminal work in the area is R. D. Luce and H. Raiffa, Games and Decisions. The famous Minimax theorem itself was presented in J. von Neumann, “Zur Theorie der Gesellschaftsspiele,” Mathematische Annalen 100 (1928): 295-320.
20. This serves to show that in theory a computer can be as good as any human chess player.
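The minimax theorem cited in note 19 can be illustrated with a toy sketch: each player chooses the move that optimizes the position’s value on the assumption that the opponent replies optimally. The tree and leaf scores below are made up for illustration; this is not a chess evaluator.

```python
def minimax(node, maximizing):
    # Leaves are numeric static evaluations; interior nodes are lists of children.
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A two-ply toy game tree with made-up leaf scores: the maximizing player
# picks a branch, then the minimizing opponent picks the worst leaf in it.
tree = [[3, 12], [2, 8], [14, 1]]
print(minimax(tree, True))  # the opponent holds each branch to 3, 2, 1; we take 3
```

Exhaustive minimax like this is only feasible on tiny trees; real chess programs must cut the search off and apply heuristics, which is where the combinatorial-explosion issues of note 21 arise.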
21. Researchers have tried various strategies to get around the problems created by this combinatorial explosion in the number of possible chess moves at each stage. See Peter Frey, “An Introduction to Computer Chess,” in Peter Frey, ed., Chess Skill in Man and Machine, and also M. M. Botvinnik, Computers in Chess, pp. 15-21.
22. A. K. Dewdney, “The King Is Dead, Long Live the King,” Scientific American, May 1986, p. 13.
23. See Gregory Chaitin, “On the Difficulty of Computations,” IEEE Transactions on Information Theory 16 (1970): 5-9, and Gregory Chaitin, “Computing the Busy Beaver Function,” IBM Watson Research Center Report RC 10722, 1970. An easy introduction to certain aspects of computability and complexity is in Michael R. Garey and David S. Johnson, Computers and Intractability.
24. See, for instance, the piece by M. A. Tsasfman and B. M. Stilman in M. M. Botvinnik, ed., Computers in Chess. Also see Carl Ebeling, All the Right Moves, pp. 56-64.
25. A lucid presentation on the two positions can be found in Carl Ebeling, All the Right Moves, pp. 1-3.
26. Compare the various strategies and systems described in Peter Frey, ed., Chess Skill in Man and Machine.
27. A recent report on HiTech is Hans Berliner, AI Magazine, Summer 1988.
28. The structure of HiTech is well documented in Carl Ebeling, All the Right Moves.
29. H. A. Simon and Allen Newell, “Heuristic Problem Solving: The Next Advance in Operations Research,” Operations Research 6 (January-February 1958).
30. A report of the system’s performance is given in Danny Kopec and Monty Newborn, “Belle and Mephisto Dallas Capture Computer Chess Titles at the FJCC,” Communications of the ACM, July 1987, pp. 640-645.
31. In 1988 HiTech became the first system to beat a human chess grandmaster, albeit one who had been out of form. See Harold C. Schonberg, New York Times, September 26, 1988.
32. See W. Daniel Hillis, “The Connection Machine,” Scientific American, June 1987.
33. Eliot Hearst, “Man and Machine: Chess Achievements and Chess Thinking,” in Peter Frey, ed., Chess Skill in Man and Machine.
34. A useful examination of the psychology of chess-playing in the light of the performance of chess programs is given in Brad Leithauser, “Computer Chess,” New Yorker, May 9, 1987, pp. 41-73. See also the article by Hearst, cited in note 33.
35. An excellent survey is in Geoffrey C. Fox and Paul C. Messina, “Advanced Computer Architectures,” Scientific American, October 1987, pp. 66-74. The flurry of research activity is evident from Richard Miller (project manager), Optical Computers: The Next Frontier in Computing, vols. 1 and 2 (Englewood, N.J.: Technical Insights, 1986).
36. Even the early checkers programs were quite good. See Pamela McCorduck, Machines Who Think, pp. 152-153.
37. H. J. Berliner, “Backgammon Computer Program Beats World Champion,” Artificial Intelligence 14, no. 1 (1980).
38. See H. J. Berliner, “Computer Backgammon,” Scientific American, June 1980.
39. The number of possible moves at each point is estimated at 200 for go. See E. Thorp and W. E. Walden, “A Computer-Assisted Study of Go on M by N Boards,” in R. B. Banerji and M. D. Mesarovic, eds., Theoretical Approaches to Non-numerical Problem-Solving (Berlin: Springer-Verlag, 1970), pp. 303-343.
40. An early effort on go is described in W. Reitman and B. Wilcox, “Pattern Recognition and Pattern Directed Inference in a Program for Playing Go,” in D. A. Waterman and F. Hayes-Roth, eds., Pattern-Directed Inference Systems.
41. As stated by John Laird, the cannibals and missionaries problem is: “Three cannibals and three missionaries want to cross a river. Though they can all row, they only have available a small boat that can hold two people. The difficulty is that the cannibals are unreliable: if they ever outnumber the missionaries on a river bank, they will kill them. How do they manage the boat trips so that all six get safely to the other side?”
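The puzzle in note 41 is small enough to solve by exhaustive search. A minimal sketch, using a breadth-first search over states of the form (missionaries on the left bank, cannibals on the left bank, boat on the left?); the state encoding and helper names are illustrative choices, not drawn from Laird:

```python
from collections import deque

def safe(m, c):
    # A bank is safe if no missionaries are present or they are not outnumbered.
    return m == 0 or m >= c

def solve():
    # State: (missionaries on left bank, cannibals on left bank, boat on left?)
    start, goal = (3, 3, True), (0, 0, False)
    parent = {start: None}  # doubles as the visited set
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            path = []  # walk parent links back to the start
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        m, c, boat = state
        sign = -1 if boat else 1  # the boat carries people away from its side
        for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:
            nm, nc = m + sign * dm, c + sign * dc
            nxt = (nm, nc, not boat)
            if (0 <= nm <= 3 and 0 <= nc <= 3
                    and safe(nm, nc) and safe(3 - nm, 3 - nc)
                    and nxt not in parent):
                parent[nxt] = state
                queue.append(nxt)
    return None

path = solve()
print(len(path) - 1)  # number of crossings; the classic answer is 11
```

Because breadth-first search explores states in order of distance from the start, the first path found uses the fewest crossings.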
42. A. Newell, J. C. Shaw, and H. A. Simon, “Empirical Explorations with the Logic Theory Machine” (1957), reprinted in E. A. Feigenbaum and J. Feldman, eds., Computers and Thought, pp. 109-133. The generalized results can be seen in A. Newell, J. C. Shaw, and H. A. Simon, “A Report on a General Problem Solving Program,” Proceedings of the International Conference on Information Processing (UNESCO, Paris, 1959), pp. 256-264.
43. Notably from the Dreyfus brothers. See Hubert Dreyfus, What Computers Can’t Do, 2nd ed.
44. A. Newell and H. A. Simon, “GPS: A Program That Simulates Human Thought,” in E. A. Feigenbaum and J. Feldman, eds., Computers and Thought, pp. 71-105.
45. H. A. Simon and Allen Newell, “Heuristic Problem Solving: The Next Advance in Operations Research,” Operations Research 6 (January-February 1958).
46. Some problems are described in Patrick H. Winston, Artificial Intelligence, pp. 146-154. The results and lessons of GPS are detailed in A. Newell and H. A. Simon, Human Problem Solving.
47. See E. Feigenbaum and Avron Barr, The Handbook of Artificial Intelligence, vol. 1, pp. 123-138.
48. An excellent paper on intelligence and computer chess is A. Newell, J. C. Shaw, and H. A. Simon, “Chess Playing Programs and the Problem of Complexity” (1958), reprinted in E. Feigenbaum and J. Feldman, eds., Computers and Thought.
49. Minsky’s views on intelligence serve us well here: Marvin Minsky, “Why People Think Computers Can’t,” Technology Review, November-December 1983, pp. 64-70.
50. Formally defined in Marvin Minsky and Seymour Papert, Perceptrons, p. 12.
51. W. S. McCulloch and W. Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Bulletin of Mathematical Biophysics 5 (1943).
52. Marvin Minsky and Seymour Papert, Perceptrons, pp. 136-150.
53. An excellent introductory article on the history and achievements of connectionism is Jerome Feldman, “Connections,” BYTE, April 1985, pp. 277-284.
54. This is reflected in the progress reports issued by the MIT AI Laboratory during that period. See, for instance, Marvin Minsky and Seymour Papert, “New Progress in Artificial Intelligence,” MIT Artificial Intelligence Laboratory, AI memo 252, 1972.
55. See Douglas Hofstadter, Metamagical Themas, pp. 274-292.
56. Widely applicable algorithms are likely to perform weakly in all their domains. See Seymour Papert, “One AI or Many?” Daedalus, Winter 1988.
57. A recent survey is in Jack D. Cowan and David H. Sharp, “Neural Nets and Artificial Intelligence,” Daedalus, Winter 1988, pp. 85-121.
58. Important papers on recent work are put together in the standard reference in the field: D. E. Rumelhart, J. L. McClelland, and the PDP Research Group, Parallel Distributed Processing.
59. See Marvin Minsky, “Connectionist Models and Their Prospects,” in David Waltz, ed., Connectionist Models and Their Implications (Norwood, N.J.: Ablex Publishing, 1988).
60. This selection is carried out in the style of “summarizing” in the society theory. See Marvin Minsky, The Society of Mind, p. 95.
61. Decision trees have been used extensively in Management Science. For an enjoyable introduction, see Howard Raiffa, Decision Analysis: Introductory Lectures (Reading, Mass.: Addison-Wesley).
62. This is a point well brought out in Marvin Minsky, “Why People Think Computers Can’t,” Technology Review, November-December 1983, pp. 64-70.
63. See R. C. Schank and R. Abelson, Scripts, Plans, Goals, and Understanding (Hillsdale, N.J.: Lawrence Erlbaum Associates, 1977).
64. Marvin Minsky, “Plain Talk about Neurodevelopmental Epistemology,” Proceedings of the Fifth International Joint Conference on AI (Cambridge, Mass., 1977). Minsky’s work culminated in a major book: Marvin Minsky, The Society of Mind.
65. Minsky, The Society of Mind, p. 17.
66. For a sketch of the society theory, see Marvin Minsky, “Society of Mind,” Artificial Intelligence, 1989.
67. For early related work, see Jerome Lettvin, H. Maturana, W. McCulloch, and W. Pitts, “What the Frog’s Eye Tells the Frog’s Brain,” Proceedings of the IRE 47 (1959): 1940-1951. This famous paper is reprinted with other related papers in Warren S. McCulloch, Embodiments of Mind. Also see W. S. McCulloch and W. Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Bulletin of Mathematical Biophysics 5 (1943), reprinted in Warren S. McCulloch, Embodiments of Mind.
68. Jerome Lettvin, H. Maturana, W. McCulloch and W. Pitts, “What the Frog’s Eye Tells the Frog’s Brain,” Proceedings of the IRE 47 (1959): 1940-1951.
69. John McDermott, “R1: A Rule-Based Configurer of Computer Systems,” Artificial Intelligence 19, no. 1 (1982). Also see John McDermott, “XSEL: A Computer Salesperson’s Assistant,” in J. Hayes, D. Michie, and Y. H. Pao, eds., Machine Intelligence 10 (New York: Halsted, Wiley, 1982).
70. P. H. Winston and K. A. Prendergast, eds., The AI Business, pp. 41-49, 92-99.
71. A strong case for the use of computers largely as office environment shapers is in Terry Winograd and Fernando Flores, Understanding Computers and Cognition: A New Foundation for Design.
72. See Seymour Papert, “One AI or Many?” Daedalus, Winter 1988, p. 7.
73. One large-scale effort that takes this problem seriously is described in D. Lenat, M. Shepherd, and M. Prakash, “CYC: Using Common Sense Knowledge to Overcome Brittleness and Knowledge Acquisition Bottlenecks,” AI Magazine, Winter 1986.
74. An enjoyable account of genetics, evolution, and intelligence is in Carl Sagan, The Dragons of Eden.
75. The original reports of Crick and Watson, surprisingly readable, may be found in James A. Peters, ed., Classic Papers in Genetics (Englewood Cliffs, N.J.: Prentice-Hall, 1959). An exciting account of the successes and failures that led to the double helix is given in J. D. Watson, The Double Helix.
76. The structure and behavior of DNA and RNA are described in Gary Felsenfeld, “DNA,” Scientific American, October 1985, and James Darnell, “RNA,” Scientific American, October 1985.
77. A fascinating account of the new biology is given in Horace F. Judson, The Eighth Day of Creation.
78. G. L. Stebbins and F. J. Ayala, “The Evolution of Darwinism,” Scientific American, July 1985, p. 73.
1. See J. David Bolter, Turing’s Man: Western Culture in the Computer Age, pp. 17-24. Bolter illustrates the mechanism for astronomical calculation described in detail in Derek J. de Solla Price, “An Ancient Greek Computer,” Scientific American, June 1959, pp. 60-67; see also Derek J. de Solla Price, Gears from the Greeks: The Antikythera Mechanism, a Calendar Computer from circa 80 B.C. Early automata and their relation to AI are discussed in Pamela McCorduck’s popular 1979 history of AI research, Machines Who Think, chapter 1. Another useful and lively source is John Cohen’s Human Robots in Myth and Science. Perhaps the best detailed sources on automata through the ages are Derek J. de Solla Price, “Automata and the Origins of Mechanism and Mechanistic Philosophy,” Technology and Culture 5 (1964): 9-23, and Silvio Bedini, “The Role of Automata in the History of Technology,” Technology and Culture 5 (1964): 24-42. A classic volume with many illustrations is Alfred Chapuis and Edmond Droz, Automata: A Historical and Technological Study, trans. Alec Reid. Otto Mayr describes the significance of automata in European culture in Authority, Liberty, and Automatic Machinery in Early Modern Europe.
2. For a general history of the mechanical arts see C. Singer, E. J. Holmyard, A. R. Hall, and T. I. Williams, eds., A History of Technology, and A. P. Usher, A History of Mechanical Inventions, 2nd ed. Those interested in ancient technologies should consult R. J. Forbes, Studies in Ancient Technology.
3. Price, “Automata and the Origins of Mechanism,” p. 11. Other important works on ancient technologies are A. G. Drachmann, The Mechanical Technology of Greek and Roman Antiquity; A. P. Neuberger, The Technical Arts and Sciences of the Ancients; and K. D. White, Greek and Roman Technology.
4. Price, “Automata and the Origins of Mechanism,” p. 11. Joseph Needham describes the fascinating automata in China at the time of the pre-Socratics in his Science and Civilisation in China, vol. 2, pp. 53-54, 516. The Chinese mechanical orchestra, consisting of twelve figures cast in bronze, is also described in Needham’s Science and Civilisation in China., vol. 4, p. 158. Descartes, who was very interested in automata, described in one of his notebooks how to reproduce the pigeon of Archytas. See Mayr, Authority, Liberty, and Automatic Machinery, p. 63.
5. For more on androids, see Samuel L. Macey, Clocks and the Cosmos: Time in Western Life and Thought; see also Carlo M. Cipolla, Clocks and Culture, 1300-1700, and David S. Landes, Revolution in Time: Clocks and the Making of the Modern World.
6. Bedini describes Torriano’s automaton in “Automata in the History of Technology,” p. 32, where it appears as figure 5. For more on P. Jacquet-Droz and Écrivain, see Bedini’s “Automata,” p. 39, and Macey’s Clocks and the Cosmos, pp. 210-211. P. Jacquet-Droz’s son, Henri-Louis, created a mechanical artist that drew flowers and a musician that played a clavecin. He also made a pair of artificial hands for a general’s son, who had lost his own hands in a hunting accident. Henri-Louis’s success in this venture was praised by the great creator of automata Jacques de Vaucanson (1709-1782). See the entries for Pierre Jacquet-Droz and Henri-Louis Jacquet-Droz in Nouvelle Biographie Générale, vol. 14 (Paris: Didot, 1868), pp. 812-813. Vaucanson was perhaps best known for his duck automaton, which ate, drank, chewed, and excreted. See Macey’s Clocks and the Cosmos, p. 210, and Bedini’s “Automata in the History of Technology,” pp. 36-37, which has a diagram of the duck’s inner mechanism (figures 11 and 12). Anyone interested in Vaucanson should see Michael Cardy, “Technology as Play: The Case of Vaucanson,” Stud. Voltaire 18th Cent. 241 (1986): 109-123. In 1726 Jonathan Swift described a machine that would automatically write books; see Eric A. Weiss, “Jonathan Swift’s Computing Machine,” Annals of the History of Computing 7 (1985): 164-165.
7. Martin Gardner, “The Abacus: Primitive but Effective Digital Computer,” Scientific American 222 (1970): 124-127; Parry H. Moon, The Abacus: Its History, Its Design, Its Possibilities in the Modern World; J. M. Pullan, The History of the Abacus (London: Hutchinson, 1968).
8. Napier’s bones or rods are described and pictured in Stan Augarten’s Bit by Bit: An Illustrated History of Computers, pp. 9-10. A more detailed treatment can be found in M. R. Williams, “From Napier to Lucas: The Use of Napier’s Bones in Calculating Instruments,” Annals of the History of Computing 5 (1983): 279-296.
9. An earlier calculating machine was devised by the polymath Wilhelm Schickard (1592-1635). Schickard’s machine and Pascal’s development of the Pascaline are described in Augarten’s Bit by Bit: An Illustrated History of Computers, pp. 15-30. A more technical account can be found in René Taton, “Sur l’invention de la machine arithmétique,” Revue d’histoire des sciences et de leurs applications 16 (1963): 139-160; Jeremy Bernstein, The Analytical Engine: Computers-Past, Present, and Future, p. 40; Herman Goldstine, The Computer from Pascal to von Neumann, pp. 7-8.
10. Blaise Pascal, Pensées (New York: E. P. Dutton & Co., 1932), p. 96, no. 340.
11. The Pascaline failed commercially for a variety of reasons; perhaps ten or fifteen were sold. See Augarten, Bit by Bit, pp. 27-30.
12. The Stepped Reckoner, as Leibniz called his machine, employed a special gear as a mechanical multiplier. See Augarten, Bit by Bit, pp. 30-35, and Goldstine, The Computer from Pascal to von Neumann, pp. 7-9. Morland’s career is described in Henry W. Dickinson’s biography, Sir Samuel Morland, Diplomat and Inventor, 1625-1695.
13. Brian Randell, ed., The Origins of Digital Computers: Selected Papers, p. 2.
14. Augarten, Bit by Bit, p. 89.
15. Babbage’s paper can be found in H. P. Babbage, Babbage’s Calculating Engines, pp. 220-222.
16. H. P. Babbage, Babbage’s Calculating Engines, pp. 223-224. On Babbage and the Astronomical Society, see Anthony Hyman, Charles Babbage: Pioneer of the Computer, pp. 50-53.
17. See chapter 2 of Augarten’s Bit by Bit, which has marvelous illustrations. Babbage’s life and career are treated in detail in Hyman’s Charles Babbage. Joel Shurkin provides a lively account of Babbage’s work in his Engines of the Mind: A History of the Computer, chapter 2. A biography recently published almost a century after its completion is H. W. Buxton, Memoirs of the Life and Labours of the Late Charles Babbage, Esq., F.R.S., ed. A. Hyman.
18. Allen G. Bromley, Introduction to H. P. Babbage, Babbage’s Calculating Engines, pp. xiii-xvi; Bernstein, The Analytical Engine, pp. 47-57.
19. Augarten, Bit by Bit, pp. 62-63; Bernstein, The Analytical Engine, p. 50; Hyman, Charles Babbage, p. 166.
20. Augarten, Bit by Bit, pp. 63-64. Babbage describes the features of his machine in “On the Mathematical Powers of the Calculating Engine,” written in 1837 and reprinted as appendix B in Hyman’s Charles Babbage.
21. A recent biography is Dorothy Stein, Ada, a Life and a Legacy.
22. Goldstine, The Computer, p. 26.
23. Her translation and notes can be found in H. P. Babbage, Babbage’s Calculating Engines, pp. 1-50.
24. The lonely end of Babbage’s life is described in Hyman, Charles Babbage, chapter 16.
25. Joel Shurkin, in Engines of the Mind, p. 104, describes Aiken’s machine as “an electromechanical Analytical Engine with IBM card handling.” For a concise history of the development of the Mark I, see Augarten’s Bit by Bit, pp. 103-107. I. Bernard Cohen provides a new perspective on Aiken’s relation to Babbage in his article “Babbage and Aiken,” Annals of the History of Computing 10 (1988): 171-193.
26. Anyone with a serious interest in the history of calculators should be aware of the following two classics: D. Baxandall, Calculating Machines and Instruments, and Ellice Martin Horsburgh, ed., Modern Instruments and Methods of Calculation: A Handbook of the Napier Tercentenary Celebration Exhibition. Some of the calculators and tabulating machines of the 1940s are described in Charles and Ray Eames, A Computer Perspective, pp. 128-159. A brief pictorial history of calculating machines can be found in George C. Chase, “History of Mechanical Computing Machinery,” Annals of the History of Computing 2 (1980): 198-226. Two important sources in the history of computing, besides the Annals, are N. Metropolis, J. Howlett, and Gian-Carlo Rota, eds., A History of Computing in the Twentieth Century, and Brian Randell, The Origins of Digital Computers.
27. See chapter 3 of Augarten’s Bit by Bit, and Eames’s A Computer Perspective, pp. 16-17, 22-30.
28. Augarten, Bit by Bit, pp. 78-83; Randell, Origins, p. 28.
29. Shurkin, Engines of the Mind, p. 94; Augarten, Bit by Bit, p. 82; Eames, A Computer Perspective, p. 39. Burroughs’s life and work are described in Molly Gleiser, “William S. Burroughs,” Computer Decisions, March 1978, pp. 34-36.
30. By 1913 the Burroughs Adding Machine Company had $8 million in sales, according to Augarten’s Bit by Bit, p. 82.
31. The legacy of the census crisis is described in detail in L. E. Truesdell, The Development of Punch Card Tabulation in the Bureau of the Census, 1890-1940 (Washington, D.C.: Government Printing Office, 1965).
32. See Geoffrey D. Austrian’s biography, Herman Hollerith: Forgotten Giant of Information Processing, pp. 50-51. Shurkin, in Engines of the Mind, chapter 3, gives a very readable and concise account of Hollerith and his census work.
33. Austrian, Herman Hollerith, pp. 16-17, 51; Augarten, Bit by Bit, p. 75.
34. Austrian, Herman Hollerith, pp. 63-64.
35. Hollerith’s system for the 1890 census is similar to one he described in an 1889 article, “An Electric Tabulating System,” extracts from which are reprinted in Randell, Origins, pp. 129-139. Also see Randell’s discussion of Hollerith’s work, pp. 125-126.
36. According to Augarten in Bit by Bit, p. 77, the Census Bureau was able to give a preliminary population total of 62,622,250 just six weeks after all the data arrived in Washington.
37. Austrian, Herman Hollerith, p. 153.
38. Shurkin, Engines of the Mind, pp. 78-82; Austrian, Herman Hollerith, chapter 13.
39. Austrian, Herman Hollerith, p. 176 ff.
40. See chapters 20 and 21 in Austrian, Herman Hollerith, as well as Shurkin, Engines of the Mind, p. 86.
41. Austrian, Herman Hollerith, p. 312.
42. Shurkin, Engines of the Mind, pp. 91-92; Augarten, Bit by Bit, pp. 177-178; Austrian, Herman Hollerith, p. 329. Thomas Watson’s career is reviewed in Augarten, Bit by Bit, pp. 168ff.
43. Shurkin, Engines of the Mind, p. 92. See “The Rise of IBM,” chapter 25, in Austrian, Herman Hollerith, and “The Rise of IBM,” chapter 6, in Augarten’s Bit by Bit. See also Charles J. Bashe, Lyle R. Johnson, John H. Palmer, and Emerson W. Pugh, IBM’s Early Computers.
44. Augarten, Bit by Bit, pp. 217-223. Shurkin examines the relations between IBM and its competitors in Engines of the Mind, pp. 260-279.
45. Aiken is quoted in Bernstein, The Analytical Engine, p. 62.
46. Bernstein, The Analytical Engine, p. 73.
1. The writings of these early thinkers are particularly insightful regarding what it means to compute. Some representative works are H. P. Babbage, “Babbage’s Analytical Engine,” Monthly Notices of the Royal Astronomical Society 70 (1910): 517-526, 645; George Boole, An Investigation of the Laws of Thought on Which Are Founded the Mathematical Theories of Logic and Probabilities (Peru, Ill.: Open Court Publishing Co., 1952); Bertrand Russell, Principles of Mathematics, 2nd ed.; and H. Hollerith, “The Electric Tabulating Machine,” Journal of the Royal Statistical Society 57, no. 4 (1894): 678-682. For a detailed account of Burroughs’s contributions, scientific and commercial, see B. Morgan, Total to Date: The Evolution of the Adding Machine.
2. Zuse’s claim is supported by the patent applications he filed. See, for instance, K. Zuse, “Verfahren zur selbsttätigen Durchführung von Rechnungen mit Hilfe von Rechenmaschinen,” German Patent Application Z23624, April 11, 1936. Translated extracts, titled “Methods for Automatic Execution of Calculations with the Aid of Computers,” appear in Brian Randell, The Origins of Digital Computers, pp. 159-166.
3. From an interview with Computerworld magazine, published in The History of Computing in 1981 by CW Communications, Framingham, Mass. The magazine’s interviewers were enterprising enough to locate Zuse in Hünfeld, Germany (where he now lives) and produce an engaging interview.
4. Jan Lukasiewicz developed two related notations, each intended to ease certain aspects of representation and computation in mathematical logic. See Donald Knuth, The Art of Computer Programming, vol. 1, Fundamental Algorithms, 2nd ed. (Reading, Mass.: Addison-Wesley, 1973), p. 336.
5. A three-page description of a special-purpose electromechanical computer used to process flying-bomb wing data is given in K. Zuse, “Rechengeräte für Flügelvermessung,” private memorandum, September 10, 1969.
6. The charge is strongly made by Rex Malik in And Tomorrow… the World (London: Millington, 1975).
7. Paul Ceruzzi’s 1980 doctoral dissertation gives the most detailed account of Zuse’s contributions to computer technology and places them in their proper context: Paul E. Ceruzzi, “The Prehistory of the Digital Computer, 1935-1945: A Cross-Cultural Study” (Texas Tech University, 1980).
8. Zuse’s own statement on his life and his computers (with many details of construction) appears in Konrad Zuse, Der Computer-Mein Lebenswerk (Berlin: Verlag Moderne Industrie, 1970). More recent reminiscences appear in Konrad Zuse, “Some Remarks on the History of Computers in Germany,” in N. Metropolis, J. Howlett, and G. C. Rota, eds., A History of Computing in the Twentieth Century, pp. 611-628.
9. John E. Savage, Susan Magidson, and Alex M. Stein, The Mystical Machine, pp. 25-26.
10. See Andrew Hodges, Alan Turing: The Enigma. Hodges’s biography, now a standard reference on Turing’s life, gives an original account of Turing’s war-time computers.
11. For an engineering account of the Colossus project, see B. Randell, “The Colossus,” reprinted in N. Metropolis, J. Howlett, and G. C. Rota, eds., A History of Computing in the Twentieth Century.
12. An excellent set of brief biographies of computer pioneers, including one of Aiken, may be found in Robert Slater, Portraits in Silicon.
13. See Andrew Hodges, Alan Turing: The Enigma.
14. See Cuthbert Hurd, “Computer Development at IBM,” in N. Metropolis, J. Howlett, and G. C. Rota, eds., A History of Computing in the Twentieth Century, pp. 389-418. IBM’s role in the development of these early computers is covered in Charles Bashe et al., IBM’s Early Computers. This detailed book is successful in showing how exhausting an intellectual and physical effort it was to construct computers.
15. The History of Computing (Framingham, Mass.: CW Communications, 1981), p. 52.
16. Grace Hopper emerges as a strong, dedicated, and inspiring intellect in her biographical sketch in Robert Slater, Portraits in Silicon.
17. John E. Savage, Susan Magidson, and Alex M. Stein, The Mystical Machine, p. 30.
18. For a brief overview of the principles and construction of ENIAC and the lessons learned in the words of the designers themselves, see J. Presper Eckert, “The ENIAC,” and John W. Mauchly, “The ENIAC.” Both pieces appear in N. Metropolis, J. Howlett, and G. C. Rota, eds., A History of Computing in the Twentieth Century, pp. 525-540, 541-550.
19. The court case brought out thousands of pages of material on early computers, valuable to the computer historian. Judge Larson’s findings are recorded in E. R. Larson, “Findings of Fact, Conclusions of Law, and Order for Judgment,” File no. 4-67, Civ. 138, Honeywell Inc. vs. Sperry Rand Corp. and Illinois Scientific Development, Inc., U.S. District Court, District of Minnesota, Fourth Division, October 19, 1973.
20. A description of the machine and its applications is given in J. V. Atanasoff, “Computing Machine for the Solution of Large Systems of Linear Algebraic Equations,” Ames, Iowa: Iowa State College, 1940. Reprinted in Brian Randell, ed., The Origins of Digital Computers: Selected Papers (Berlin: Springer-Verlag, 1973), pp. 305-325.
21. The concept of a stored program has proved to be one of the most robust in computer science. For a history of its development and implementation, and also for a clear analysis of the ENIAC experience, see Arthur Burks, “From ENIAC to the Stored Program: Two Revolutions in Computers,” in N. Metropolis, J. Howlett, and G. C. Rota, eds., A History of Computing in the Twentieth Century, pp. 311-344.
22. For a lucid explanation of the stored-program idea, see John E. Savage, Susan Magidson, and Alex M. Stein, The Mystical Machine, pp. 31-32, 58-62.
23. The excitement of these developments is skillfully captured in Wilkes’s autobiography: Maurice Wilkes, Memoirs of a Computer Pioneer (Cambridge: MIT Press, 1981).
24. For the role of research and development in the rise of IBM, see Charles Bashe et al., IBM’s Early Computers.
25. Alan M. Turing, “Computing Machinery and Intelligence,” Mind 59 (1950): 433-460.
26. Pamela McCorduck, Machines Who Think (San Francisco: W. H. Freeman, 1979), pp. 93-102.
27. Von Neumann stressed the differences between the nervous system and the computer in “The General and Logical Theory of Automata,” in L. A. Jeffress, ed., Cerebral Mechanisms in Behavior (New York: John Wiley & Sons, 1951). He failed to see how the two could be made functionally equivalent.
28. A book was published posthumously, however: J. von Neumann, The Computer and the Brain (New Haven: Yale University Press, 1958).
29. Norbert Wiener, Cybernetics (Cambridge: MIT Press, 1948).
30. Wiener is a delightful writer: the best biographies of him are perhaps his own. See Norbert Wiener, Ex-Prodigy (Cambridge: MIT Press, 1963) and Norbert Wiener, I Am a Mathematician (Boston: Houghton-Mifflin, 1964).
31. Wiener liked to believe that the medium underlying life was not energy but information. For an account of how this motivated many of Wiener’s projects, see the excellent biography Steve Heims, John von Neumann and Norbert Wiener: From Mathematics to the Technologies of Life and Death (Cambridge: MIT Press, 1980).
32. Many of Fredkin’s results come from studying his own model of computation, which explicitly reflects a number of fundamental principles of physics. See the classic Edward Fredkin and Tommaso Toffoli, “Conservative Logic,” International Journal of Theoretical Physics 21, nos. 3-4 (1982).
33. A set of concerns about the physics of computation analytically similar to Fredkin’s may be found in Norman Margolus, “Physics and Computation,” Ph.D. thesis, MIT.
34. In his provocative book The Coming of Post-Industrial Society, Harvard sociologist Daniel Bell introduces the idea that the codification of knowledge is becoming central to society. In The Fifth Generation (Reading, Mass.: Addison-Wesley, 1983), Edward Feigenbaum and Pamela McCorduck discuss the impending reality of such a society.
35. See Norbert Wiener, Cybernetics.
36. The Differential Analyzer and other such analog computing machines are described in chapter 5 of Michael Williams, A History of Computing Technology (Englewood Cliffs, N.J.: Prentice-Hall, 1985). Bush’s own account of the computer is presented in “The Differential Analyzer,” Journal of the Franklin Institute 212, no. 4 (1931): 447-488.
37. The drawbacks of analog computers are considered in chapter 5 of Michael Williams, History of Computing Technology (Englewood Cliffs, N.J.: Prentice-Hall, 1985).
38. See Norbert Wiener, Cybernetics.
39. Such trends are fast paced. See Tom Forester, High Tech Society (Cambridge: MIT Press, 1987).
40. A clear account of the technology behind the compact disk appears in John J. Simon, “‘From Sand to Circuits’ and Other Enquiries,” Harvard University Office of Information Technology, 1986.
41. See John J. Simon, “‘From Sand to Circuits’ and Other Enquiries,” Harvard University Office of Information Technology, 1986.
42. The structure of the transistor is explained in Stephen Senturia and Bruce Wedlock, Electronic Circuits and Applications (New York: McGraw-Hill, 1983).
43. See Claude Shannon and Warren Weaver, The Mathematical Theory of Communication (Urbana, Ill.: University of Illinois Press, 1964).
44. In recent years very interesting work has been done to find out how the brain processes perceptual inputs. For a sample of current thinking in the area, see Ellen Hildreth and Christof Koch, “The Analysis of Visual Motion: From Computational Theory to Neural Mechanisms,” MIT Artificial Intelligence Laboratory, AI memo no. 919, 1986.
45. A clear and technically accurate piece on the revolution in music brought about by the representation of music in digital forms is presented in Understanding Computers: Input/Output (Alexandria, Va.: Time-Life Books, 1986).
46. Haugeland clarifies many issues by attempting to formalize our intuitions. What does it mean when we say that the mind is a computer? asks Haugeland in Artificial Intelligence: The Very Idea.
47. Alan Turing, “Computing Machinery and Intelligence,” Mind 59 (1950): 433-460 (reprinted in E. Feigenbaum and J. Feldman, Computers and Thought). Norbert Wiener, Cybernetics, or Control and Communication in the Animal and the Machine. Warren McCulloch and Walter Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Bulletin of Mathematical Biophysics 5 (1943): 115-137. Claude Shannon, “Programming a Computer for Playing Chess,” Philosophical Magazine 41 (1950): 256-275. A related paper, more amenable to the layperson, is “A Chess-Playing Machine,” Scientific American, February 1950, p. 48.
48. See A. Newell, J. C. Shaw, and H. A. Simon, “Programming the Logic Theory Machine,” Proceedings of the Western Joint Computer Conference, 1957, pp. 230-240.
49. See A. Newell, J. C. Shaw, and H. A. Simon, “Empirical Explorations of the Logic Theory Machine,” Proceedings of the Western Joint Computer Conference, 1957, pp. 218-239.
50. The broad techniques of the Logic Theory Machine were generalized in GPS. This is described in A. Newell, J. C. Shaw, and H. A. Simon, “Report on a General Problem-Solving Program,” reprinted in E. Feigenbaum and J. Feldman, eds., Computers and Thought. Newell and Simon continued their studies and summarized their results in Human Problem Solving, which placed less emphasis on the actual computer implementation of their ideas.
51. A. Newell and H. A. Simon, “Heuristic Problem Solving: The Next Advance in Operations Research,” Journal of the Operations Research Society of America 6, no. 1 (1958), reprinted in Herbert Simon, Models of Bounded Rationality, vol. 1, Economic Analysis and Public Policy (Cambridge: MIT Press, 1982).
52. Notably the Dreyfus brothers. See Hubert Dreyfus, What Computers Can’t Do, 2nd ed.
53. Indeed, the prediction about chess has not yet come true. The Fredkin Prize will go to the first computer to become world chess champion. Samuel’s checker program was not written specifically as a game-playing program but as an exercise in machine learning. See Arthur L. Samuel, “Some Studies in Machine Learning Using the Game of Checkers,” reprinted in E. A. Feigenbaum and J. Feldman, eds., Computers and Thought, pp. 279-293.
54. McCorduck’s delightful book on the history of artificial intelligence, Machines Who Think, contains a chapter on the now famous Dartmouth Conference.
55. See Edward Feigenbaum’s short reflection on twenty-five years of artificial intelligence: “AAAI President’s Message,” AI Magazine, Winter 1980-1981.
56. The version most referred to is Marvin Minsky, “Steps toward Artificial Intelligence,” in E. A. Feigenbaum and J. Feldman, eds., Computers and Thought, pp. 406-450.
57. LISP was originally introduced in a set of memos at the MIT Artificial Intelligence Laboratory. Much of this found its way into formal publications. See John McCarthy, “Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I,” Communications of the ACM 3, no. 4 (1960). The language soon became popular enough for McCarthy to publish a manual: John McCarthy, P. W. Abrahams, D. J. Edwards, T. P. Hart, and M. I. Levin, LISP 1.5 Programmer’s Manual (Cambridge: MIT Press, 1962). See Pamela McCorduck, Machines Who Think, pp. 97-102.
58. Daniel Bobrow, “Natural Language Input for a Computer Problem Solving System,” in Marvin Minsky, Semantic Information Processing, pp. 146-226.
59. Thomas Evans, “A Program for the Solution of Geometric-Analogy Intelligence Test Questions,” in Marvin Minsky, Semantic Information Processing, pp. 271-353.
60. This work is described in R. Greenblatt, D. Eastlake, and S. Crocker, “The Greenblatt Chess Program,” MIT Artificial Intelligence Laboratory, AI memo 174, 1968. The program defeated Hubert Dreyfus, who once strongly doubted that a chess program could match even an amateur human player.
61. The lessons of DENDRAL are recorded and analyzed in Robert Lindsay, Bruce Buchanan, Edward Feigenbaum, and Joshua Lederberg, Applications of Artificial Intelligence for Chemical Inference: The DENDRAL Project (New York: McGraw-Hill, 1980). A brief and clear explanation of the essential mechanisms behind DENDRAL is given in Patrick Winston, Artificial Intelligence (1984), pp. 163-164, 195-197.
62. Much has been written about ELIZA, but the clearest account of how ELIZA works is from Weizenbaum himself: “ELIZA: A Computer Program for the Study of Natural Language Communication between Man and Machine,” Communications of the ACM 9 (1966): 36-45. ELIZA has, of course, attracted numerous criticisms, many of which were first voiced by Weizenbaum himself. See Hubert Dreyfus, What Computers Can’t Do.
63. For many years SHRDLU was cited as a prominent accomplishment of artificial intelligence. Winograd’s thesis has been published in book form: Understanding Natural Language (New York: Academic Press, 1972). A brief version appears as “A Procedural Model of Thought and Language,” in Roger Schank and Kenneth Colby, eds., Computer Models of Thought and Language (San Francisco: W. H. Freeman, 1973).
64. Minsky and Papert point out that these toy examples offer many important abstractions for further analysis. See Marvin Minsky and Seymour Papert, “Artificial Intelligence Progress Report,” MIT Artificial Intelligence Laboratory, AI memo 252, 1973.
65. Warren McCulloch and Walter Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Bulletin of Mathematical Biophysics 5 (1943): 115-137. Rosenblatt’s classic work is Principles of Neurodynamics (New York: Spartan Books, 1962).
66. Minsky and Papert trace much of this controversy and history, with technical details, in the prologue and epilogue of the revised edition of their book, published in 1988.
67. This trend is explained and praised in Edward Feigenbaum, “The Art of Artificial Intelligence: Themes and Case Studies in Knowledge Engineering,” Fifth International Joint Conference on Artificial Intelligence, 1977.
68. The approach was compelling in light of what it could do. See the papers in E. Feigenbaum and J. Feldman, Computers and Thought.
69. Knowledge representation was and continues to be an important area of artificial intelligence research. See R. Brachman and H. Levesque, eds., Readings in Knowledge Representation (Los Altos, Calif.: Morgan Kaufmann, 1986).
70. The restaurant scene is a popular example of scripts as a means of representing knowledge. Scripts are brought out as a powerful scheme for reasoning in R. Schank and R. Abelson, Scripts, Plans, Goals, and Understanding (Hillsdale, N.J.: Lawrence Erlbaum Associates, 1977).
71. Minsky’s work on frames is one of the most cited in AI. The most complete written form of the theory is Marvin Minsky, “A Framework for Representing Knowledge,” MIT Artificial Intelligence Laboratory, AI memo 306, 1974.
72. See R. Schank and R. Abelson, Scripts, Plans, Goals, and Understanding (Hillsdale, N.J.: Lawrence Erlbaum Associates, 1977).
73. An excellent introduction to the technology and applications of expert systems is F. Hayes-Roth, D. A. Waterman, and D. B. Lenat, eds., Building Expert Systems (Reading, Mass.: Addison-Wesley, 1983).
74. Some famous expert systems are described by the creators themselves in F. Hayes-Roth, D. A. Waterman, and D. B. Lenat, eds., Building Expert Systems (Reading, Mass.: Addison-Wesley, 1983).
75. See Edward Feigenbaum and Pamela McCorduck, The Fifth Generation.
76. Artificial intelligence is beginning to have an important effect on the productivity of many organizations. This phenomenon is explored in Edward Feigenbaum, Pamela McCorduck, and Penny Nii, The Rise of the Expert Company (Reading, Mass.: Addison-Wesley, 1989).
Pattern Recognition: The Search for Order
1. An excellent treatment of the role of imagery and “holistic” representations in cognition may be found in Ned Block, ed., Imagery (Cambridge: MIT Press, 1981).
2. See Newell and Simon’s analysis of human chess playing in Allen Newell and Herbert Simon, Human Problem Solving (Englewood Cliffs, N.J.: Prentice-Hall, 1972).
3. An essay of special relevance to the discussion here is Zenon Pylyshyn, “Imagery and Artificial Intelligence,” in C. W. Savage, ed., Perception and Cognition: Issues in the Foundations of Psychology, Minnesota Studies in the Philosophy of Science, vol. 9 (Minneapolis: University of Minnesota Press, 1978).
4. Imagination is a skill that we develop with age. Piaget’s experiments show that to the infant (up to a certain age), an object that is not visible does not exist. See J. Piaget, Play, Dreams, and Imitation in Childhood (New York: W. W. Norton, 1951).
5. This technique is simple but surprisingly powerful and has been used extensively in AI programs. See Patrick H. Winston, Artificial Intelligence, pp. 159-167.
6. A clear introduction to the essential problems and procedures in machine vision appears in chapter 10 of the classic textbook Patrick H. Winston, Artificial Intelligence.
7. This and other techniques for identifying edges are reviewed in L. Davis, “A Survey of Edge Detection Techniques,” Computer Graphics and Image Processing 4 (1975): 248-270. A more detailed review appears in Azriel Rosenfeld and Avinash Kak, Digital Picture Processing (New York: Academic Press, 1976). A more recent summary of results, including John Canny’s work, is Ellen Hildreth, “Edge Detection,” MIT Artificial Intelligence Laboratory, AI memo 858, 1985.
8. The use of zero crossings in stereo to isolate edges was introduced in David Marr and Tomaso Poggio, “A Theory of Human Stereo Vision,” Proceedings of the Royal Society of London 204 (1979). The use of zero crossings was also addressed in Ellen Hildreth’s work: “The Detection of Intensity Changes by Computer and Biological Vision Systems,” Computer Vision, Graphics, and Image Processing 23 (1979). For efficiencies more recently incorporated, see John Canny, “Finding Edges and Lines in Images,” MIT Artificial Intelligence Laboratory, technical report 720, 1983.
9. False hypotheses may also be corrected by some of the techniques detailed in L. S. Davis, “A Survey of Edge Detection Techniques,” Computer Graphics and Image Processing 4 (1975): 248-270. Also see Ellen Hildreth, “Edge Detection,” MIT Artificial Intelligence Laboratory, AI memo 858, 1985.
10. Hubel and Wiesel are responsible for many important aspects of our knowledge today about the biological mechanisms for vision. They conducted many imaginative experiments to reveal the structure and functional decomposition of the cortex. Notable is their discovery of the presence of edge detection neurons. See D. H. Hubel and T. N. Wiesel, “Functional Architecture of Macaque Monkey Visual Cortex,” Journal of Physiology 195 (1968): 215-242. A truly fascinating book written for the layperson as an introduction to the brain’s vision processing is David Hubel, Eye, Brain, and Vision.
11. For details of the computational aspects of recovering details of surfaces from images by means of sombrero filtering and other related techniques, see W. Eric L. Grimson, From Images to Surfaces (Cambridge: MIT Press, 1981).
12. An illuminating article on the eye’s computational capacities for image processing is Tomaso Poggio, “Vision by Man and Machine,” Scientific American, April 1984.
13. See Tomaso Poggio, “Vision by Man and Machine,” Scientific American, April 1984.
14. David Marr is brilliant at fusing studies from biology and machine vision. His highly influential classic, published posthumously, is Vision. A paper that excellently summarizes and demonstrates his computational approach to vision is D. Marr and H. K. Nishihara, “Visual Information Processing: Artificial Intelligence and the Sensorium of Sight,” Technology Review, October 1978.
15. See Tomaso Poggio, “Vision by Man and Machine,” Scientific American, April 1984.
16. The geometry of stereopsis and stereo vision is discussed well in S. T. Barnard and M. A. Fischler, “Computational Stereo from an IU Perspective,” Proceedings of the Image Understanding Workshop, 1981.
17. Edges introduce constraints that greatly reduce the number of ways two images can be fused. Without such preprocessing, matching would be extremely difficult. Consider, for example, the computational complexity of fusing random-dot stereograms. See David Marr, Vision, p. 9.
18. For the details of these techniques, see R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis (New York: Wiley, 1973).
19. See W. Eric L. Grimson, From Images to Surfaces. Object recognition and labeling is a hard problem. For the role of knowledge and preconceived models in this process, see Rodney Brooks, “Model-Based Three-Dimensional Interpretation of Two-Dimensional Images,” Proceedings of the Seventh International Joint Conference on Artificial Intelligence, 1981. Generalized cylinders are frequently used as intermediate representations of objects. See D. Marr and H. K. Nishihara, “Visual Information Processing: Artificial Intelligence and the Sensorium of Sight,” Technology Review, October 1978.
20. The parallel nature of the computational processes constituting early vision is examined in an excellent review article: Tomaso Poggio, Vincent Torre, and Christof Koch, “Computational Vision and Regularization Theory,” Nature, September 26, 1985. The role of analog computations is also discussed there.
21. For some lessons that evolution offers for strategies in artificial intelligence, see Rodney Brooks, “Intelligence without Representation,” Artificial Intelligence, 1989.
22. This point is brought out with particular elegance in Dana Ballard and Christopher Brown, “Vision: Biology Challenges Technology,” BYTE, April 1985.
23. The structure of the Connection Machine is excellently described, along with some machine vision applications, in W. Daniel Hillis, “The Connection Machine,” Scientific American, June 1987.
24. The role of analog computations in vision is discussed in Tomaso Poggio, Vincent Torre, and Christof Koch, “Computational Vision and Regularization Theory,” Nature, September 26, 1985.
25. Neural networks and related mechanisms have been applied fairly successfully in vision problems. For work in early vision, see D. H. Ballard, “Parameter Nets: Toward a Theory of Low-Level Vision,” Artificial Intelligence Journal 22 (1984): 235-267. For higher-level processes, see D. Sabbah, “Computing with Connections in Visual Recognition of Origami Objects,” Cognitive Science 9 (1985): 25-50.
26. For instance, early neural networks failed to determine connectedness of drawings. The ability of more complex neural nets to determine connectedness remains controversial. See Marvin Minsky and Seymour Papert, Perceptrons, pp. 136-150.
27. The manifesto of the new connectionists is Parallel Distributed Processing, vols. 1 and 2, by David Rumelhart, James McClelland, and the PDP Research Group. Chapter 2 of this book describes the new neural-net structures.
28. Marvin Minsky and Seymour Papert, Perceptrons, revised ed., p. vii.
29. Distributed systems, whose mechanisms and memory are stored not centrally but over a large space, are less prone to catastrophic degradation. Neural networks are not only parallel but also distributed systems. See chapter 1 of D. E. Rumelhart, J. L. McClelland, and the PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1.
30. An excellent study of skill acquisition with some implications for parallel distributed processing is D. E. Rumelhart and D. A. Norman, “Simulating a Skilled Typist: A Study of Skilled Cognitive-Motor Performance,” Institute for Cognitive Science, technical report 8102, University of California, San Diego, 1981.
31. This objection is articulated in Hubert Dreyfus and Stuart Dreyfus, Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer (New York: The Free Press, 1986), pp. 101-121.
32. Daniel Hillis points out that for many physical systems that are inherently parallel, fluid flow, for example, it is simply not convenient to think in terms of sequential processes. Similarly, logic turns out to be inconvenient for the analysis of, say, early-vision processes. See W. Daniel Hillis, “The Connection Machine,” Scientific American, June 1987.
33. This multilevel, multiparadigm approach is followed in the society theory of the mind. See Marvin Minsky, The Society of Mind.
34. Higher-level descriptions have a smaller volume of information, but they incorporate a larger number of constraints and require more extensive knowledge about the physical world. See chapter 1 in David Marr, Vision.
35. See the appendix of Marvin Minsky, The Society of Mind.
36. For recent developments in the design and fabrication of chips, see J. D. Meindl, “Chips for Advanced Computing,” Scientific American, October 1987.
37. David Marr and Tomaso Poggio, “Cooperative Computation of Stereo Disparity,” Science 194 (1976): 283-287.
38. See David Marr and Tomaso Poggio, “From Understanding Computation to Understanding Neural Circuitry,” Proceedings of the Royal Society of London, 1977, pp. 470-488.
39. Daniel Hillis’s thesis suggests areas where parallelism ought to be exploited. See W. Daniel Hillis, The Connection Machine (Cambridge: MIT Press, 1985).
40. David Marr is responsible for these important representations for vision processing. All three are clearly explained in D. Marr and H. K. Nishihara, “Visual Information Processing: Artificial Intelligence and the Sensorium of Sight,” Technology Review, October 1978.
41. Segmentation was one of the chief concerns in the construction of the Hearsay speech-recognition system. The problem was resolved in part by using multiple knowledge sources and multiple experts. See L. Erman, F. Hayes-Roth, V. Lesser, and D. Raj Reddy, “The HEARSAY-II Speech Understanding System: Integrating Knowledge to Resolve Uncertainty,” Computing Surveys 12, no. 2 (1980): 213-253.
42. See L. Erman, F. Hayes-Roth, V. Lesser, and D. Raj Reddy, “The HEARSAY-II Speech Understanding System: Integrating Knowledge to Resolve Uncertainty,” Computing Surveys 12, no. 2 (1980): 213-253.
43. The early years of artificial intelligence saw a lot of work on character recognition. But researchers could not perform extensive experiments on their programs because of a lack of computer power. See W. W. Bledsoe and I. Browning, “Pattern Recognition and Reading by Machine,” Proceedings of the Eastern Joint Computer Conference, 1959. A more general article is Oliver Selfridge and U. Neisser, “Pattern Recognition by Machine,” Scientific American, March 1960, 60-68.
44. The Hearsay system has an interesting implementation of such a manager. See L. Erman, F. Hayes-Roth, V. Lesser, and D. Raj Reddy, “The HEARSAY-II Speech Understanding System: Integrating Knowledge to Resolve Uncertainty,” Computing Surveys 12, no. 2 (1980): 213-253.
45. For some interesting points on the use of multiple experts, see Douglas Lenat, “Computer Software for Intelligent Systems,” Scientific American, September 1984.
46. For basic techniques for template matching, see R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis (New York: Wiley, 1973).
47. A playful treatment of the nature of fonts and type styles is presented in chapter 13 of Douglas Hofstadter, Metamagical Themas.
48. This set of paradigms is successfully applied in the Hearsay system in the context of speech recognition. See L. Erman, F. Hayes-Roth, V. Lesser, and D. Raj Reddy, “The HEARSAY-II Speech Understanding System: Integrating Knowledge to Resolve Uncertainty,” Computing Surveys 12, no. 2 (1980): 213-253.
49. For details of the project, see Tomaso Poggio and staff, “MIT Progress in Understanding Images,” Proceedings of the Image Understanding Workshop (Cambridge, Mass., 1988), pp. 1-16.
50. The project tries to incorporate what we know about the nature of vision computation in the brain, an issue treated in Tomaso Poggio, Vincent Torre, and Christof Koch, “Computational Vision and Regularization Theory,” Nature, September 26, 1985.
51. See T. Poggio, J. Little, et al., “The MIT Vision Machine,” Proceedings of the Image Understanding Workshop (Cambridge, Mass., 1988), pp. 177-198.
52. For related work, see Anya Hurlbert and Tomaso Poggio, “Making Machines (and Artificial Intelligence) See,” Daedalus, Winter 1988.
53. The Terregator and some other projects of the robotics group at Carnegie-Mellon University are described in Eric Lerner, “Robotics: The Birth of a New Vision,” Science Digest, July 1985.
54. An informative article on Carver Mead and his specialized chips for vision processing is Andrew Pollack, “Chips that Emulate the Function of the Retina,” New York Times, August 26, 1987, p. D6.
55. Harry Newquist, ed., AI Trends ’87: A Comprehensive Annual Report on the Artificial Intelligence Industry (Scottsdale, Ariz.: DM Data, 1987).
56. See “Technology Aiding in Fingerprint Identification, U.S. Reports,” New York Times, May 4, 1987, p. A20.
57. For a description of these new products, see Wesley Iversen, “Fingerprint Reader Restricts Access to Terminals and PCs,” Electronics, June 11, 1987, p. 104.
58. A very good review paper on AI vision systems and their industrial applications is Michael Brady, “Intelligent Vision,” in W. Eric Grimson and Ramesh Patil, eds., AI in the 1980s and Beyond.
59. Harry Newquist, ed., AI Trends ’87: A Comprehensive Annual Report on the Artificial Intelligence Industry (Scottsdale, Ariz.: DM Data, 1987).
60. A substantive article on the role of intelligent systems in modern warfare is J. Franklin, Laura Davis, Randall Shumaker, and Paul Morawski, “Military Applications,” in Stuart Shapiro, ed., Encyclopedia of Artificial Intelligence, vol. 1 (New York: John Wiley & Sons, 1987).
61. The intelligence of remotely piloted aircraft offers great possibilities, as one can see from Peter Gwynne, “Remotely Piloted Vehicles Join the Service,” High Technology, January 1987, pp. 38-43.
62. Expert systems, pattern recognition, and other kinds of medical-information systems will be used increasingly in medicine. See Glenn Rennels and Edward Shortliffe, “Advanced Computing for Medicine,” Scientific American, October 1987.
63. A beautifully illustrated article on medical imaging technology is Howard Sochurek, “Medicine’s New Vision,” National Geographic, January 1987, pp. 2-41.
64. Schwartz’s proposal created excitement in the art world. Her research is described in Lillian Schwartz, “Leonardo’s Mona Lisa,” Arts and Antiques, January 1987. A briefer, more technical description appears in a book on computers and art: Cynthia Goodman, Digital Visions (New York: H. N. Abrams, 1987), pp. 41-43.
65. To J. B. Watson, the founder of behaviorism in America, thinking was like talking to oneself. He attached great importance to the small movements of the tongue and larynx when one is thinking. See J. B. Watson, Behaviorism (New York: Norton, 1925).
66. Viewers of the film My Fair Lady will recall that the anatomy of speech production is an important topic for phoneticians. See M. Kenstowicz and C. Kisseberth, Generative Phonology: Description and Theory (New York: Academic Press, 1979) and P. Ladefoged, A Course in Phonetics, 2nd ed. (New York: Harcourt Brace Jovanovich, 1982).
67. The distribution of sound is particular to each language. An important study on English is N. Chomsky and M. Halle, The Sound Pattern of English (New York: Harper & Row, 1968).
68. Some problems and procedures for early auditory processing are presented in S. Seneff, “Pitch and Spectral Analysis of Speech Based on an Auditory Perspective,” Ph.D. thesis, MIT Dept. of Electrical Engineering, 1985.
69. This issue is covered in J. S. Perkell and D. H. Klatt, eds., Variability and Invariance in Speech Processes (Hillsdale, N.J.: Lawrence Erlbaum Associates, 1985).
70. H. Sakoe and S. Chiba, “A Dynamic-Programming Approach to Continuous Speech Recognition,” Proceedings of the International Congress of Acoustics, Budapest, Hungary, 1971, pp. 206-213.
71. This approach is faithfully followed in the construction of the Hearsay speech-recognition system. See L. Erman, F. Hayes-Roth, V. Lesser, and D. Raj Reddy, “The HEARSAY-II Speech Understanding System: Integrating Knowledge to Resolve Uncertainty,” Computing Surveys 12, no. 2 (1980): 213-253.
72. A comprehensive review of ASR is Victor Zue, “Automated Speech Recognition,” in W. Eric L. Grimson and Ramesh Patil, eds., AI in the 1980s and Beyond.
73. See Harry Newquist, ed., AI Trends ’87: A Comprehensive Annual Report on the Artificial Intelligence Industry (Scottsdale, Ariz.: DM Data, 1987).
74. See the fascinating cover stories on computers and music in the June 1986 issue of BYTE.
The Search for Knowledge
1. These and other intriguing aspects of memory are discussed in chapter 8 in Marvin Minsky, Society of Mind.
2. The most complete written form of the frame theory is Marvin Minsky, “A Framework for Representing Knowledge,” MIT Artificial Intelligence Laboratory, AI memo 306. Other, less technical versions have appeared since. See Marvin Minsky, “A Framework for Representing Knowledge,” in John Haugeland, ed., Mind Design.
3. A brief description of the classification systems currently followed is given in Classification: A Beginner’s Guide to Some of the Systems of Biological Classification in Use Today, British Museum (Natural History), London, 1983.
4. The successes and limitations of these systems are discussed in the excellent book Lynn Margulis and Karlene Schwartz, Five Kingdoms: An Illustrated Guide to the Phyla of Life on Earth, 2nd ed. (New York: W. H. Freeman, 1988).
5. Dewey first published his classic work anonymously under the title A Classification and Subject Index. Many editions have appeared since, because the Dewey system has grown to meet every challenge of the world’s libraries. See Melvil Dewey, Dewey Decimal Classification and Relative Index: Devised by Melvil Dewey, 19th ed., edited under the direction of Benjamin Custer (Albany, N.Y.: Forest Press, 1979).
6. Ross Quillian is generally credited with developing semantic networks as a knowledge representation for AI systems. Although he introduced this representation as early as 1963, the standard reference for his work in this area is M. Ross Quillian, “Semantic Memory,” in Marvin Minsky, Semantic Information Processing (1968).
7. See Patrick H. Winston, “Learning Structural Descriptions from Examples,” in Patrick H. Winston, The Psychology of Computer Vision (New York: McGraw-Hill, 1975).
8. Some of the psychological realities behind semantic networks are discussed in M. Ross Quillian, “Semantic Memory,” in Marvin Minsky, Semantic Information Processing.
9. Some interesting explanations of cognitive dissonance are given in Henry Gleitman, Psychology, 2nd ed. (New York: W. W. Norton & Co., 1986), pp. 374-376.
10. An excellent book on the influence of media on political thinking is Edwin Diamond and Stephen Bates, The Spot: The Rise of Political Advertising on Television (Cambridge: MIT Press, 1984).
11. A revealing book on the psychological aspects of advertising today is William Meyers, The Image Makers: Power and Persuasion on Madison Avenue (New York: Times Books, 1984).
12. Much research has been done in recent years on the mechanisms for computation and memory in the human brain, particularly since any new knowledge could contribute significantly to the debate on connectionism. An introductory account is in Paul M. Churchland, Matter and Consciousness, revised edition.
13. This last point is strongly brought out by Roger Schank and Peter Childers in The Creative Attitude (New York: Macmillan, 1988).
14. That computers can never be creative has long been an argument against the possibility of artificial intelligence. A short rebuttal and an examination of what it means to be creative appears as part of Marvin Minsky, “Why People Think Computers Can’t,” AI Magazine 3, no. 4 (Fall 1982).
15. D. Raj Reddy, Foundations and Grand Challenges of Artificial Intelligence, forthcoming. For a similar analysis of the brain’s processing capabilities, see J. A. Feldman and D. H. Ballard, “Connectionist Models and Their Properties,” Cognitive Science 6 (1982): 205-254.
16. Work is being done to allow intelligent systems to exploit past experiences instead of relying solely on the deep analysis of the current situation. For an example, see Craig Stanfill and David Waltz, “Toward Memory Based Reasoning,” Communications of the ACM 29, no. 12 (1986).
17. See Craig Stanfill and David Waltz, “Toward Memory Based Reasoning,” Communications of the ACM 29, no. 12 (1986).
18. Human chess-playing and computer chess are analyzed for similarities and differences in Eliot Hearst, “Man and Machine: Chess Achievements and Chess Thinking,” in Peter Frey, ed., Chess Skill in Man and Machine, 2nd ed., 1983.
19. See Eliot Hearst, “Man and Machine.”
20. Newell and his associates maintain that much of learning is reorganization of certain memories into efficient “chunks.” See John E. Laird, P. Rosenbloom, and Allen Newell, “Toward Chunking as a General Learning Mechanism,” Proceedings of the National Conference of the American Association for Artificial Intelligence, Austin, Tex., 1984.
21. To see how chunking fits into the SOAR view of cognition and intelligence, see John E. Laird, P. Rosenbloom, and Allen Newell, “SOAR: An Architecture for General Intelligence,” Artificial Intelligence Journal 33 (1987): 1-64.
22. An excellent introductory book on the structure and design of expert systems, with contributions from many figures notable for their work in this area, is Frederick Hayes-Roth, D. A. Waterman, and D. B. Lenat, eds., Building Expert Systems.
23. XCON, once called R1, was jointly developed by Carnegie-Mellon University and Digital Equipment Corporation (DEC). See J. McDermott, “R1: A Rule-Based Configurer of Computer Systems,” Artificial Intelligence Journal 19, no. 1 (1982).
24. For a DEC view of its experiences with XCON, see Arnold Kraft, “XCON: An Expert Configuration System at Digital Equipment Corporation,” in Patrick Winston and Karen Prendergast, eds., The AI Business (Cambridge: MIT Press, 1984).
25. Many techniques were introduced to handle the uncertainty of propositions that an expert system is asked to deal with. Fuzzy logic is one such system. See Lotfi Zadeh, “Fuzzy Logic and Approximate Reasoning,” Synthese 30 (1975): 407-428. Zadeh’s fuzzy logic has a number of limitations, and other systems for uncertainty have grown in popularity. See Edward Shortliffe and Bruce Buchanan, “A Model of Inexact Reasoning in Medicine,” Mathematical Biosciences 23 (1975): 350-379.
26. The role of expert systems and knowledge-based systems in the economies of the future and the implications of the Japanese fifth-generation project are discussed in Edward Feigenbaum and Pamela McCorduck, The Fifth Generation.
27. See Edward Feigenbaum, “The Art of Artificial Intelligence: Themes and Case Studies in Knowledge Engineering,” Fifth International Joint Conference on Artificial Intelligence, Cambridge, Mass., 1977.
28. The experiences and contributions of the DENDRAL experiments are recorded and analyzed in detail in R. Lindsay, B. G. Buchanan, E. A. Feigenbaum, and J. Lederberg, DENDRAL: Artificial Intelligence and Chemistry (New York: McGraw-Hill, 1980).
29. An excellent article reviewing the research on, and lessons from, the two systems is Bruce Buchanan and Edward Feigenbaum, “DENDRAL and Meta-DENDRAL: Their Applications Dimension,” Artificial Intelligence Journal 11 (1978): 5-24.
30. Victor L. Yu, Lawrence M. Fagan, S. M. Wraith, William Clancey, A. Carlisle Scott, John Hannigan, Robert Blum, Bruce Buchanan, and Stanley Cohen, “Antimicrobial Selection by Computer: A Blinded Evaluation by Infectious Disease Experts,” Journal of the American Medical Association 242, no. 12 (1979): 1279-1282.
31. The results of the MYCIN project at Stanford have been very influential on current thinking in artificial intelligence. They are presented and analyzed in Bruce Buchanan and Edward Shortliffe, eds., Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project (Reading, Mass.: Addison-Wesley, 1984).
32. The expert-system industry is a burgeoning one. See Paul Harmon and David King, Expert Systems: Artificial Intelligence in Business. The diversity of application areas for expert systems is remarkable. See Terri Walker and Richard Miller, Expert Systems ’87 (Madison, Ga.: SEAI Technical Publications, 1987).
33. Harry Newquist, ed., AI Trends ’87: A Comprehensive Annual Report on the Artificial Intelligence Industry (Scottsdale, Ariz.: DM Data, 1987). Expert systems are changing the way problem solving is handled in corporations across the world. See Edward Feigenbaum, Pamela McCorduck, and Penny Nii, The Rise of the Expert Company.
34. See Edward Feigenbaum, “The Art of Artificial Intelligence: Themes and Case Studies in Knowledge Engineering,” Fifth International Joint Conference on Artificial Intelligence, Cambridge, Mass., 1977.
35. The MYCIN expert system originally appeared as Edward Shortliffe’s doctoral dissertation in 1974. Other Stanford dissertations explored further the broad concepts behind MYCIN and produced some important tools and applications. Randall Davis’s TEIRESIAS, an interactive tool to help the knowledge engineer structure expertise, is presented in Randall Davis, “Applications of Meta-Level Knowledge to the Construction, Maintenance, and Use of Large Knowledge Bases,” Ph.D. dissertation, Stanford University, Artificial Intelligence Laboratory, 1976. William van Melle succeeded in showing that in keeping with the conceptual framework proposed earlier by Feigenbaum et al., the inference engine and the knowledge base could in fact be separated out. Van Melle’s system, EMYCIN, represented the structure of inferences and reasoning in MYCIN. See W. van Melle, “A Domain-Independent System That Aids in Constructing Knowledge-Based Consultation Programs,” Ph.D. dissertation, Stanford University, Computer Science Department, 1980.
36. EMYCIN was combined with a knowledge-base on pulmonary disorder diagnosis to produce PUFF. See Janice Aikens, John Kunz, Edward Shortliffe, and Robert Fallat, “PUFF: An Expert System for Interpretation of Pulmonary Function Data,” in William Clancey and Edward Shortliffe, eds., Readings in Medical Artificial Intelligence: The First Decade.
37. William Clancey and Reed Letsinger, “NEOMYCIN: Reconfiguring a Rule-Based Expert System for Application to Teaching,” in William Clancey and Edward Shortliffe, eds., Readings in Medical Artificial Intelligence: The First Decade; and E. H. Shortliffe, A. C. Scott, M. Bischoff, A. B. Campbell, W. van Melle, and C. Jacobs, “ONCOCIN: An Expert System for Oncology Protocol Management,” in Proceedings of the Seventh International Joint Conference on Artificial Intelligence (Menlo Park, Calif.: American Association for Artificial Intelligence, 1981), pp. 876-881.
38. See Ramesh Patil, Peter Szolovits, and William Schwartz, “Causal Understanding of Patient Illness in Medical Diagnosis,” in William Clancey and Edward Shortliffe, eds., Readings in Medical Artificial Intelligence: The First Decade.
39. Organization of knowledge is especially difficult when the domains are as broad as that of CADUCEUS, the system developed chiefly by Harry Pople and Jack Myers. See Harry Pople, “Heuristic Methods for Imposing Structure on Ill-Structured Problems: The Structure of Medical Diagnostics,” in Peter Szolovits, ed., Artificial Intelligence in Medicine (Boulder, Colo.: Westview Press, 1982).
40. A short overview of the performance of CADUCEUS is Harry Pople, “CADUCEUS: An Experimental Expert System for Medical Diagnosis,” in Patrick Winston and Karen Prendergast, eds., The AI Business.
41. A recent evaluation from the medical community of the performance and potential of medical artificial intelligence is William Schwartz, Ramesh Patil, and Peter Szolovits, “Artificial Intelligence in Medicine: Where Do We Stand?” New England Journal of Medicine 316 (1987): 685-688.
42. See William Schwartz, Ramesh Patil, and Peter Szolovits, “Artificial Intelligence in Medicine: Where Do We Stand?” New England Journal of Medicine 316 (1987): 685-688.
43. For applications of artificial intelligence in a wide variety of areas, including finance, see Wendy Rauch-Hindin, Artificial Intelligence in Business, Science, and Industry.
44. The structure of Prospector is explained in R. O. Duda, J. G. Gaschnig, and P. E. Hart, “Model Design in the PROSPECTOR Consultant System for Mineral Exploration,” in D. Michie, ed., Expert Systems in the Micro-Electronic Age (Edinburgh: Edinburgh University Press, 1979). A report on Prospector’s role in finding the molybdenum deposit in Washington is in A. N. Campbell, V. F. Hollister, R. O. Duda, and P. E. Hart, “Recognition of a Hidden Mineral Deposit by an Artificial Intelligence Program,” Science 217, no. 3 (1982). Prospector is also discussed in Avron Barr, Edward Feigenbaum, and Paul Cohen, eds., The Handbook of Artificial Intelligence (Los Altos, Calif.: William Kaufman, 1981).
45. Digital Equipment Corporation’s AI projects are described in Susan Scown, The Artificial Intelligence Experience (Maynard, Mass.: Digital Press, 1985).
46. Many of these expert-system products are described in Paul Harmon and David King, Expert Systems: Artificial Intelligence in Business, pp. 77-133.
47. Two overviews of the goals and constituent projects of the Strategic Computing Initiative are Dwight Davis, “Assessing the Strategic Computing Initiative,” High Technology, April 1985; and Karen McGraw, “Integrated Systems Development,” DS&E (Defense Science and Electronics), December 1986.
48. The Pilot’s Associate is assessed by two Air Force officers in Ronald Morishige and John Retelle, “Air Combat and Artificial Intelligence,” Air Force Magazine, October 1985.
49. Solutions to some of the limitations of expert systems imposed by current architectures are discussed in Randall Davis, “Expert Systems: Where Are We? And Where Do We Go From Here?” MIT Artificial Intelligence Laboratory, AI memo 665, 1982.
50. Until recently, machine learning has been a neglected area within artificial intelligence, perhaps because of the many difficulties underlying the problem. An important collection of papers on machine learning is Ryszard Michalski, Jaime Carbonell, and Tom Mitchell, eds., Machine Learning: An Artificial Intelligence Approach (Palo Alto, Calif.: Tioga Publishing Company, 1983).
51. Douglas Lenat wrote AM (Automated Mathematician) as an experiment in causing machine learning by discovery, in the area of number theory. EURISKO is an improved discovery program. The systems are discussed in Douglas Lenat, “Why AM and EURISKO Appear to Work,” Artificial Intelligence Journal 23 (1984): 269-294.
52. Robert Hink and David Woods, “How Humans Process Uncertain Knowledge,” AI Magazine, Fall 1987. This paper is written primarily to assist knowledge engineers in structuring domain knowledge in a statistically accurate manner.
53. The cognitive and behavioral aspects of human decision making under uncertainty are considered in an important collection of papers: Daniel Kahneman, Paul Slovic, and Amos Tversky, eds., Judgment under Uncertainty: Heuristics and Biases. The essays in this volume assess intriguing aspects of the way people process and interpret information.
54. See Samuel Holtzman, Intelligent Decision Systems (Reading, Mass.: Addison-Wesley, 1989).
55. This is not surprising, since language is a principal means of expressing thought. The entire field of psycholinguistics is devoted to studying the connection between language and thought. So strong is the appeal of this connection that some believe the Whorfian hypothesis, which, loosely stated, holds that there can be no thought without language. Others accept a much weaker form of the Whorfian hypothesis: that there has to be a language of thought, a language that is not necessarily the same as one’s spoken language. See J. Fodor, T. Bever, and M. Garrett, The Psychology of Language (New York: McGraw-Hill, 1975); and Benjamin Whorf, Language, Thought, and Reality: Selected Writings (Cambridge, Mass.: MIT Press, 1956).
56. These and other theoretical aspects of computational linguistics are covered in Mary D. Harris, Introduction to Natural Language Processing.
57. Terry Winograd has cogently argued that natural languages assume an enormous quantity of background knowledge. A computer system that lacks this knowledge will not be able to understand language in the sense that the speaker would expect a human listener to. See Terry Winograd, “What Does It Mean to Understand Language,” Cognitive Science 4 (1980).
58. Y. Bar-Hillel, “The Present Status of Automatic Translation of Languages,” in F. L. Alt, ed., Advances in Computers, vol. 1 (New York: Academic Press, 1960).
59. An account of the impressive performance of Logos appears in Tim Johnson, Natural Language Computing: The Commercial Applications (London: Ovum, 1985), pp. 160-164.
60. There is more to what our statements mean than what we actually say. We are generally concerned with the practical effects of what we say. Some kinds of speech are actions, and such expressions are referred to as speech acts. See John Searle, Speech Acts (Cambridge: Cambridge University Press, 1969).
61. Metaphors and idioms are a powerful way to communicate. Lakoff argues that metaphors are not merely literary devices but permeate every aspect of everyday thought. See George Lakoff and Mark Johnson, Metaphors We Live By.
62. Terry Winograd, “What Does It Mean to Understand Language,” Cognitive Science 4 (1980).
63. Much has been written about SHRDLU, since it demonstrates deep understanding and reasoning within its limited area of specialty. Winograd’s 1970 thesis on SHRDLU is slightly modified and published as Terry Winograd, Understanding Natural Language (New York: Academic Press, 1972). A brief presentation of the main ideas appears as Terry Winograd, “A Procedural Model of Language Understanding,” in Roger Schank and Kenneth Colby, eds., Computer Models of Thought and Language (San Francisco: W. H. Freeman, 1973).
64. That toy worlds offer abstractions of significant value is argued by Marvin Minsky and Seymour Papert in “Artificial Intelligence Progress Report,” MIT Artificial Intelligence Laboratory, AI memo 252, 1972.
65. A short article about Harris and Intellect is Barbara Buell, “The Professor Getting Straight As on Route 128,” Business Week, April 15, 1985.
66. Scripts appeared as early as 1973. See Robert Abelson, “The Structure of Belief Systems,” in Roger Schank and Kenneth Colby, eds., Computer Models of Thought and Language (San Francisco: W. H. Freeman, 1973). But their use as a powerful mechanism for knowledge representation became sophisticated only a few years later. The standard reference on scripts is Roger Schank and Robert Abelson, Scripts, Plans, Goals, and Understanding (Hillsdale, N.J.: Lawrence Erlbaum Associates, 1977).
67. Schank’s efforts at Cognitive Systems are described by him in Frank Kendig, “A Conversation with Roger Schank,” Psychology Today, April 1983. Roger Schank has since resigned all major roles at Cognitive Systems.
68. An excellent survey of the natural language business is Tim Johnson, Natural Language Computing: The Commercial Applications (London: Ovum, 1985).
69. Translating text by computer is a rapidly growing business. See Harry Newquist, ed., AI Trends ’87: A Comprehensive Annual Report on the Artificial Intelligence Industry (Scottsdale, Ariz.: DM Data, 1987).
70. See Harry Newquist, ed., AI Trends ’87: A Comprehensive Annual Report on the Artificial Intelligence Industry (Scottsdale, Ariz.: DM Data, 1987).
71. The production of R.U.R. and its implications for robots are discussed in Jasia Reichardt, Robots: Fact, Fiction, and Prediction, a delightful book on the history and future of robots.
72. Some of these early robots are described in Reichardt, Robots: Fact, Fiction, and Prediction.
73. This generation of robots and their role in factory automation is examined by Isaac Asimov with his usual scientific clarity in Isaac Asimov and Karen Frenkel, Robots: Machines in Man’s Image.
74. See Harry Newquist, ed., AI Trends ’87: A Comprehensive Annual Report on the Artificial Intelligence Industry (Scottsdale, Ariz.: DM Data, 1987).
75. Today the importance of robot programming is immense, since programming is the primary path to adaptive robots. See Tomas Lozano-Perez, “Robot Programming,” MIT Artificial Intelligence Laboratory, memo 698, 1982.
76. Isaac Asimov and Karen Frenkel, Robots: Machines in Man’s Image.
77. Fully automatic factories are unusual today. More common are plants whose organization and operation rely significantly on robotic machinery, while human workers handle other important operations. The structure of such production units is realistically described in Christopher Joyce, “Factories Will Measure As They Make,” New Scientist, September 4, 1986.
78. What will the factory of the future be like? Some analyses are put forward in Philippe Villers, “Intelligent Robots: Moving Toward Megassembly,” and Paul Russo, “Intelligent Robots: Myth or Reality.” Both of these essays appear in Patrick Winston and Karen Prendergast, eds., The AI Business. One writer speculates that fully automated factories will be moved away from earth, and we will soon be industrializing outer space. See Lelland A. C. Weaver, “Factories in Space,” The Futurist, May-June 1987.
79. Isaac Asimov and Karen Frenkel, Robots: Machines in Man’s Image.
80. See Gene Bylinsky, “Invasion of the Service Robots,” Fortune, September 14, 1987.
81. Gene Bylinsky, “Invasion of the Service Robots,” Fortune, September 14, 1987.
82. For details on Odex and other robots being used to increase safety for human workers in nuclear plants, see Steve Handel, “AI Assists Nuclear Plant Safety,” Applied Artificial Intelligence Reporter, June 1986. See “High Tech to the Rescue,” a special report in Business Week, June 16, 1986, for a description of Allen Bradley’s factory. Also see Gene Bylinsky, “Invasion of the Service Robots,” Fortune, September 14, 1987.
83. Some of these new methodologies are described in the context of artificial legs in Marc H. Raibert and Ivan Sutherland, “Machines That Walk,” Scientific American, January 1983.
84. Anderson’s Ping-Pong player was an outcome of his doctoral work at the University of Pennsylvania. The design and construction of this robot are detailed in Russell Anderson, A Robot Ping-Pong Player (Cambridge: MIT Press, 1985).
85. The dexterity and versatility of some of today’s robotic hands are certainly encouraging. A report, accompanied by some excellent photographs, appears in Daniel Edson, “Giving Robot Hands a Human Touch,” High Technology, September 1985.
86. An informative article on what the voice-activated robots of Leifer and Michalowski could do for the disabled is Deborah Dakins, “Voice-Activated Robot Brings Independence to Disabled Patients,” California Physician, August 1986. Studies in robotics are leading to an important industry: the eventual production of artificial limbs, hearts, and ears. See Sandra Atchison, “Meet the Campus Capitalists of Bionic Valley,” Business Week, May 5, 1986.
87. The Waseda robotic musician is an interesting synthesis of a variety of technologies. There are two excellent references on Wabot-2. The performance aspects of the robot are covered in Curtis Roads, “The Tsukuba Musical Robot,” Computer Music Journal, Summer 1986. The design and engineering aspects of the robot are covered in a set of articles authored by the Waseda team itself. These articles appear in a special issue of the university’s research bulletin: “Special Issue on WABOT-2,” Bulletin of Science and Engineering Research Laboratory (Waseda University) no. 112 (1985).
88. Paul MacCready’s unconventional experiments in aerodynamics are quite fascinating. One can meet him and his flying machines in Patrick Cooke, “The Man Who Launched a Dinosaur,” Science 86, April 1986.
89. The Defense Department’s Autonomous Land Vehicle project has produced at least two transportation “robots” that can be used in terrain that is not passable by conventional means. The Adaptive Suspension Vehicle, which was developed primarily at Ohio State University, is described in Kenneth Waldron, Vincent Vohnout, Arrie Perry, and Robert McGhee, “Configuration Design of the Adaptive Suspension Vehicle,” International Journal of Robotics Research, Summer 1984. The Terregator (another vehicle) and other projects of robotics groups at Carnegie-Mellon University are described in Eric Lerner, “Robotics: The Birth of a New Vision,” Science Digest, July 1985.
90. The intelligence of remotely piloted aircraft is described in Peter Gwynne, “Remotely Piloted Vehicles Join the Service,” High Technology, January 1987, pp. 38-43.
91. The contribution of each of these disciplines to the technology underlying robots is described in the important review article Michael Brady, “Artificial Intelligence and Robotics,” MIT Artificial Intelligence Laboratory, AI memo no. 756, 1983.
92. That most robots today function in only organized or artificial environments has been a major concern to Rodney Brooks, an MIT roboticist whose mobile robots and artificial insects perform very simple tasks in the dynamic environments we find ourselves in every day. See Rodney Brooks, “Autonomous Mobile Robots,” in W. Eric Grimson and Ramesh Patil, eds., AI in the 1980s and Beyond.
93. Noel Perrin, a professor at Dartmouth, argues that even though robots are not yet rampant in households, research in robotics has been successful enough to warrant a serious look at the impact robots will ultimately have on society. See Noel Perrin, “We Aren’t Ready for the Robots,” Wall Street Journal, editorial page, February 25, 1986.
94. Japan’s Fifth Generation Project and the role of ICOT and MITI are presented in their technological, personal, and sociopolitical dimensions in the well-written book Edward Feigenbaum and Pamela McCorduck, The Fifth Generation. This provocative book served as a rallying cry for the American industry’s efforts to respond to ICOT.
95. The first complete description of the Japanese fifth-generation project is “Outline of Research and Development Plans for Fifth Generation Computer Systems,” Institute for New Generation Computer Technology (ICOT), Tokyo, May 1982. Descriptions of work in progress are frequently released by ICOT through its periodicals, conference proceedings, and research reports. ICOT’s primary journal is ICOT Journal Digest.
96. A brief but complete description of the American and European responses to the Japanese effort appears as chapter 7 in Susan J. Scown, The Artificial Intelligence Experience: An Introduction (Maynard, Mass.: Digital Press, 1985). Perhaps the most thorough coverage of these international efforts appears in Fifth Generation Computers: A Report on Major International Research Projects and Cooperatives (Madison, Ga.: SEAI Technical Publications, 1985).
97. See Edward Feigenbaum and Pamela McCorduck, The Fifth Generation, 1983, pp. 774-226, and also Susan J. Scown, The Artificial Intelligence Experience: An Introduction (Maynard, Mass.: Digital Press, 1985), pp. 150-152.
98. Antitrust laws remain a problem in the operation of MCC. See David Fishlock, “The West Picks Up on the Japanese Challenge: How US Is Rewriting Anti-Trust Laws,” Financial Times (London), January 27, 1986.
99. Susan J. Scown, The Artificial Intelligence Experience: An Introduction (Maynard, Mass.: Digital Press, 1985), pp. 154-155.
100. Research funded and administered by Alvey is described in its publications. See “Alvey Program Annual Report, 1987,” Alvey Directorate, London, 1987.
101. Susan J. Scown, The Artificial Intelligence Experience: An Introduction (Maynard, Mass.: Digital Press, 1985), pp. 153-154.
102. See Fifth Generation Computers: A Report on Major International Research Projects and Cooperatives (Madison, Ga.: SEAI Technical Publications, 1985).
103. Not everyone is concerned about the future of Japan’s fifth-generation computer systems. See J. Marshall Unger, The Fifth Generation Fallacy: Why Japan Is Betting Its Future on Artificial Intelligence (Oxford: Oxford University Press, 1987).
The Science of Art
1. See the articles published in the Computer Music Journal, where much of this revolution is documented. A selection of such articles may be found in Curtis Roads, The Music Machine: Selected Readings from “Computer Music Journal.” Articles on computer music also appear in the journal Computers and the Humanities.
2. The capacities and limitations of digital technology are reviewed in parts 1 and 2 of Curtis Roads and John Strawn, eds., Foundations of Computer Music. On digital tone generation, see chapter 13 of Hal Chamberlin, Musical Applications of Microprocessors.
3. For a general introduction to the principles of music synthesis and a brief history, see chapter 1 of Chamberlin’s Musical Applications. Chapter 18 describes music-synthesis software, and chapter 19 reviews a number of synthesizers.
4. MIDI is described in Chamberlin, Musical Applications, pp. 312-316. Chamberlin made the prediction that by 1990 a new music protocol would be developed “as the weaknesses of MIDI become apparent” (p. 789).
5. In The Technology of Computer Music, a text for composers, Max V. Mathews provides an appendix on psychoacoustics and music, because, he argued, no intuitions exist for the new sounds possible with computers.
6. See chapter 16 in Chamberlin’s Musical Applications.
7. Hal Chamberlin’s “A Sampling of Techniques for Computer Performance of Music,” originally published in Byte magazine in September 1977, describes how to create four-part melodies on a personal computer. Stephen K. Roberts described his own polyphonic keyboard system in “Polyphony Made Easy,” an article originally published in Byte in January 1979. Both articles were reprinted in Christopher P. Morgan, ed., The “Byte” Book of Computer Music; see pp. 47-64 and pp. 117-120, respectively.
8. In chapter 18 of Musical Applications, Chamberlin describes programming techniques for programmed performance systems, and claims, “it is immaterial whether the synthesis is performed in real time or not, since the ‘score’ is definitely prepared outside of real time” (p. 639).
9. On the editing of sequences, see Chamberlin, Musical Applications, chapter 11.
10. Bateman discusses the role of the computer in composing in chapters 11 and 12 of Introduction to Computer Music. He relates the stochastic composition possible with the computer to the fact that Mozart once composed with the aid of a pair of dice, but he emphasizes the computer’s subservience to the human composer’s creativity. For a good review of the various selection techniques involved in AI composing programs, see C. Ames, “AI in Music,” in Stuart C. Shapiro, ed., Encyclopedia of Artificial Intelligence, vol. 1, pp. 638-642.
11. See Ames, “AI in Music,” in Shapiro, Encyclopedia of Artificial Intelligence, vol. 1 pp. 638-642. Also see S. Papert, “Computers in Education: Conceptual Issues,” in Shapiro, Encyclopedia of Artificial Intelligence, vol. 1, p. 183. Papert points out that the difficulties of performance may be circumvented by use of the computer, and that students may begin to compose in the same way that they learn to draw when studying art or to write when studying literature.
12. Music systems for personal computers are described in C. Yavelow, “Music Software for the Apple Macintosh,” Computer Music Journal 9 (1985): 52-67. See also C. Yavelow, “Personal Computers and Music,” Journal of the Audio Engineering Society 35 (1987): 160-193.
13. A comprehensive review of computerized music notation, including a table of important systems and a useful bibliography, is found in N. P. Carter, R. A. Bacon, and T. Messenger, “The Acquisition, Representation, and Reconstruction of Printed Music by Computer: A Review,” Computers and the Humanities 22 (1988): 117-136. Professional Composer is being used by Garland Press to produce editions of sixteenth-century music (p. 130).
14. Chamberlin discusses the use of synthesizers in music education on p. 710 of Musical Applications.
15. A brief assessment of the role of computers in art can be found in Philip J. Davis and Reuben Hersh, Descartes’ Dream: The World According to Mathematics, pp. 43-53. Herbert W. Franke discusses the resistance computer art encountered in “Refractions of Science into Art,” in H.-O. Peitgen and P. H. Richter, The Beauty of Fractals: Images of Complex Dynamical Systems, pp. 181-187.
16. Neal Weinstock, in Computer Animation, discusses some of the resolution limitations of home computers (see chapter 1). For an overview of graphics hardware, including output-only and display hardware, see chapter 2 in Weinstock, and chapter 3 in the standard text, J. D. Foley and A. Van Dam, Fundamentals of Interactive Computer Graphics.
17. Advances in computer graphics are occurring at a rapid rate, and the literature is vast. Several important sources of up-to-date information are the ACM Transactions on Graphics, Computer Graphics, Quarterly Report of the ACM Special Interest Group on Graphics, Computer Graphics World, and articles on computer graphics in Byte. Melvin L. Prueitt’s Art and the Computer has dazzling pictures produced using a wide variety of computer-graphics techniques and includes a series of examples of art produced on personal computers (see pp. 29 and 191-194). A review of the early history of computer graphics, complete with illustrations, is provided by H. W. Franke in Computer Graphics-Computer Art, pp. 57-105. Examples of practical applications of computer graphics are found in Donald Greenberg, Aaron Marcus, Allen H. Schmidt, and Vernon Gorter, The Computer Image: Applications of Computer Graphics. The shift from fixed images to the modern transformable computer image has stimulated a new analytical approach to graphics, exemplified by Jacques Bertin, Semiology of Graphics, trans. William J. Berg.
18. Recent developments in color-graphics displays are described in H. John Durrett, ed., Color and the Computer. Chapter 12 reviews the available technology for color hard-copy devices, and there are also chapters on color in medical images, cartography, and education applications.
19. Many of these techniques are described and illustrated in Prueitt, Art and the Computer. Some of the extraordinary achievements in rendering reflection are illustrated on pp. 144-150, and an example of the synthesis of natural and artificial scenes can be found on p. 29. An excellent example of distortion is provided by plates E to J of Foley and Van Dam, Fundamentals of Interactive Computer Graphics, which illustrate the mapping of images of a mandrill onto a series of different geometric shapes. Chapter 14 of that work provides a description of techniques being used to enhance the realism of computer imagery.
20. Those interested in the technical aspects of image transformation should consult chapter 8 of Foley and Van Dam, Fundamentals of Interactive Computer Graphics.
21. Computer artists active during the 1970s describe their relations to their chosen medium in Ruth Leavitt, Artist and Computer. When asked if his work could be done without the computer, computer artist Aldo Giorgini said, “Yes, in a fashion analogous to the one of carving marble with a sponge” (p. 12).
22. The game of Life was invented by John Conway in 1970 and is one example of a cellular automaton. As Tommaso Toffoli and Norman Margolus point out, “A cellular automata machine is a universe synthesizer.” See their Cellular Automata Machines: A New Environment for Modeling, p. 1. An entertaining explanation of the game of Life can be found in chapter 7 of Ivars Peterson, The Mathematical Tourist: Snapshots of Modern Mathematics. See also A. K. Dewdney, The Armchair Universe, “World Four: Life in Automata.” Dewdney explores the concepts of one-dimensional computers and three-dimensional Life. Chapter 25 in Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy, Winning Ways for Your Mathematical Plays, is devoted to games of Life. On page 830 the authors describe how to make a Life computer: “Many computers have been programmed to play the game of Life. We shall now return the compliment by showing how to define Life patterns that can imitate computers.” See, as well, William Poundstone, The Recursive Universe: Cosmic Complexity and the Limits of Scientific Knowledge, chapters 11 and 12 and the section “Life for Home Computers.”
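The rules of Conway’s game of Life fit in a few lines of code; the grid representation and glider test below are an illustrative sketch, not taken from any of the cited works:

```python
from collections import Counter

# Conway's rules: a live cell with 2 or 3 live neighbors survives;
# a dead cell with exactly 3 live neighbors is born; all else dies.

def step(live):
    """Advance one generation; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider" repeats its shape every 4 generations, shifted diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after4 = glider
for _ in range(4):
    after4 = step(after4)
shifted = {(x + 1, y + 1) for x, y in glider}
print(after4 == shifted)  # True
```

The unbounded-set representation avoids fixing a grid size, which suits patterns like the glider that wander off indefinitely.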
23. See Richard Dawkins’s 1987 work The Blind Watchmaker. Dawkins devised a simple model for evolution; starting with a stick figure, his program produces more and more complex figures often resembling actual natural shapes, such as insects. Sixteen numbers function as “genes” and determine the resulting forms, or “biomorphs.”
24. In Dawkins’s scheme, the human observer provides the natural selection, choosing “mutations” that will “survive.” An artist could use aesthetic criteria to determine the direction of “evolution.” A. K. Dewdney describes Dawkins’s program, which runs on the Mac, in “A Blind Watchmaker Surveys the Land of Biomorphs,” Scientific American, February 1988, pp. 128-131.
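Dawkins’s cumulative-selection loop can be sketched in miniature. Only the 16-number genome and the pick-the-best-mutant cycle come from his description; the gene-to-figure drawing is omitted, and the scoring function below is an invented stand-in for the human observer’s aesthetic choice:

```python
import random

random.seed(1)

def mutate(genes):
    """Copy the genome with one gene nudged up or down,
    mimicking single-gene mutation in the biomorph program."""
    child = list(genes)
    i = random.randrange(len(child))
    child[i] += random.choice((-1, 1))
    return child

def evolve(genes, prefer, generations=50, brood=8):
    """Each generation, breed `brood` mutants and keep the one the
    selector prefers -- cumulative selection, not random search."""
    for _ in range(generations):
        genes = max((mutate(genes) for _ in range(brood)), key=prefer)
    return genes

# Stand-in "observer": prefers genomes whose sum is near 40.
prefer = lambda g: -abs(sum(g) - 40)
result = evolve([0] * 16, prefer)
print(sum(result))  # converges toward 40
```

The point of the sketch is Dawkins’s: selection applied generation after generation reaches the target far faster than blind sampling of 16-number genomes ever could.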
25. For an excellent introduction to the concept of recursion, see Poundstone, The Recursive Universe.
26. An account of Mandelbrot’s eclectic career and his discovery of the “geometry of nature” can be found in James Gleick, Chaos: Making a New Science, pp. 81-118. The bible of fractal geometry is Benoit Mandelbrot, The Fractal Geometry of Nature (1983). This work superseded Mandelbrot’s earlier volume Fractals: Form, Chance, and Dimension. Images produced by fractals are described and illustrated in the classic work by H.-O. Peitgen and P. H. Richter, The Beauty of Fractals: Images of Complex Dynamical Systems. Peitgen and Richter devoted themselves to the study of the Mandelbrot set and produced spectacular pictures, which they published and displayed (see Gleick, Chaos, pp. 229 ff.).
27. See pp. 213-240 in Gleick’s Chaos. Chaotic-fractal evolutions are discussed in chapter 20 of Mandelbrot’s Fractal Geometry.
28. Some of Mandelbrot’s work on price change and scaling in economics can be found in The Fractal Geometry of Nature, pp. 334-340. For more insight into Mandelbrot’s application of fractals in economics and biology, see Gleick, Chaos, pp. 81-118. On the properties of scaling in music, see The Fractal Geometry of Nature, pp. 374-375.
29. Peterson, The Mathematical Tourist, pp. 114-116. Also see Mandelbrot, The Fractal Geometry of Nature, chapter 5.
30. Peterson, The Mathematical Tourist, pp. 126-127. On modeling clouds with fractals, see Mandelbrot, The Fractal Geometry of Nature, p. 112.
31. “One should not be surprised that scaling fractals should be limited to providing first approximations of the natural shapes to be tackled. One must rather marvel that these first approximations are so strikingly reasonable” (Mandelbrot, The Fractal Geometry of Nature, p. 19). In “The Computer as Microscope,” in The Mathematical Tourist, Peterson points out that “natural fractals are often self-similar in a statistical sense” (p. 119). See also pp. 155-164.
32. “Computer graphics provides a convenient way of picturing and exploring fractal objects, and fractal geometry is a useful tool for creating computer images” (Peterson, The Mathematical Tourist, p. 123).
33. Mandelbrot explores the application of fractal geometry to cosmology in The Fractal Geometry of Nature. See in particular chapter 9.
34. Mandelbrot’s discussion of the etymology is in The Fractal Geometry of Nature, pp. 4-5.
35. The concept of dimensionality is central to an understanding of fractals and to an understanding of the structure of nature. See chapter 3 in Mandelbrot, The Fractal Geometry of Nature.
36. See Mandelbrot, “Index of Selected Dimensions,” in Fractals: Form, Chance, and Dimension, p. 365, where the seacoast dimension is given as 1.25. Mandelbrot devotes chapter 6 of The Fractal Geometry of Nature to “Snowflakes and Other Koch Curves.” In “The Koch Curve Tamed” he gives its fractal dimension as 1.2618 (p. 36). Peterson describes the Koch curve, created in 1904, in The Mathematical Tourist, pp. 116-119.
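The dimension Mandelbrot reports for the Koch curve follows directly from its construction: each segment is replaced by 4 segments, each 1/3 as long, giving similarity dimension log 4 / log 3. A few illustrative lines (not from the cited texts) make the arithmetic explicit:

```python
import math

# Similarity dimension: N self-similar pieces at scale 1/r gives
# D = log N / log r. For the Koch curve, N = 4 and r = 3.
dimension = math.log(4) / math.log(3)
print(f"{dimension:.4f}")  # 1.2619 (Mandelbrot truncates to 1.2618)

# After n replacement steps the curve has 4**n pieces of length
# 3**-n, so its measured length grows as (4/3)**n without bound --
# the sense in which the curve is "more than" one-dimensional.
for n in (1, 2, 3, 10):
    print(n, (4 / 3) ** n)
```

The same formula gives dimension log 2 / log 2 = 1 for an ordinary line segment, which is why a value strictly between 1 and 2 signals a genuinely fractal curve.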
37. See chapter 10 of Mandelbrot, The Fractal Geometry of Nature, and the chapter “Strange Attractors” in Gleick, Chaos.
38. See Mandelbrot on fractal art, pp. 23-24 of The Fractal Geometry of Nature. He points out that the images created by fractals may be reminiscent of the work of M. C. Escher because Escher was influenced by hyperbolic tilings, which are related to fractal shapes (p. 23). Mandelbrot also suggests that the work of certain great artists of the past, when it illustrated nature, exemplified “issues tackled by fractal geometry”: the examples are the frontispiece of Bible moralisée illustrées, Leonardo’s Deluge, and Hokusai’s Great Wave (see pp. C1, 2, 3, 16). Some striking examples of fractal images are described and illustrated in Prueitt’s Art and the Computer (pp. 119, 121-124, 127, 166, 169). Alan Norton has produced beautiful and bizarre complex shapes by generating and displaying geometric fractals in three dimensions (see Prueitt, pp. 123-124). For images that resemble natural landscapes, see plates C9, C11, C13, and C15 in Mandelbrot’s Fractal Geometry of Nature. Mandelbrot himself claims that these artificial landscapes are the fractal equivalent of the “complete synthesis of hemoglobin from the component atoms and (a great deal of) time and energy” (p. C8).
39. On sophisticated animation systems, see chapter 4 in Nadia Magnenat-Thalmann and Daniel Thalmann, Computer Animation: Theory and Practice. On fractals and their use in generating images, see pp. 106-110.
40. Harold Cohen, “How to Draw Three People in a Botanical Garden,” AAAI-88, Proceedings of the Seventh National Conference on Artificial Intelligence, 1988, pp. 846-855. Some of the implications of AARON are discussed in Pamela McCorduck, “Artificial Intelligence: An Aperçu,” Daedalus, Winter 1988, pp. 65-83. This issue of Daedalus, devoted to AI, has been published in book form as Stephen R. Graubard, ed., The Artificial Intelligence Debate: False Starts, Real Foundations.
41. For a comparison between traditional animation procedures and new computer methods, see chapters 1 and 2 in Magnenat-Thalmann and Thalmann, Computer Animation. See also Weinstock, Computer Animation.
42. Prueitt, Art and the Computer, p. 30. Indeed, Prueitt suggests that computer art “may be closer to the human mind and heart than other forms of art. That is, it is an art created by the mind rather than by the body” (pp. 2-3).
43. The most famous of these editors is EMACS, a real-time display editor that Stallman developed in 1974 from earlier systems, in particular, TECO, developed in 1962 by Richard Greenblatt et al. For a description of EMACS, see Richard M. Stallman, “EMACS: The Extensible, Customizable, Self-Documenting Display Editor,” in David R. Barstow, Howard E. Shrobe, and Erik Sandewall, eds., Interactive Programming Environments, pp. 300-325.
44. For example, the electronic Oxford English Dictionary (OED) enables scholars to answer questions that would have taken a lifetime of work in the recent past. See Cullen Murphy, “Computers: Caught in the Web of Bytes,” The Atlantic, February 1989, pp. 68-70.
45. New techniques for accessing information (on-line searches) are described in Roy Davies, ed., Intelligent Information Systems: Progress and Prospects.
46. For a recent assessment of desktop publishing, see John R. Brockmann, “Desktop Publishing-Beyond GEE WHIZ: Part 1, A Critical Overview,” and Brockmann, “Desktop Publishing-Beyond GEE WHIZ: Part 2, A Critical Bibliography of Materials,” both in IEEE Transactions on Professional Communication, March 1988.
47. New tools for conceptual organization are described in Edward Barrett, Text, Context, and Hypertext.
48. In the Introduction to RACTER, The Policeman’s Beard is Half-Constructed, William Chamberlain describes the process behind RACTER’s prose: certain rules of English are entered into the computer, and what the computer produces is based upon the words it finds in its files, which are then combined according to “syntax directives.” Chamberlain concludes that this process “seems to spin a thread of what might initially pass for coherent thinking throughout the computer-generated copy so that once the program is run, its output is not only new and unknowable, it is apparently thoughtful. It is crazy ‘thinking’ I grant you, but ‘thinking’ that is expressed in perfect English.” See the discussion by A. K. Dewdney, “Conversations with RACTER,” in The Armchair Universe, pp. 77-88. Dewdney points out that RACTER is not artificially intelligent but “artificially insane” (p. 77).
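Chamberlain’s “syntax directives” idea, words drawn from files and slotted into grammatical frames so the output is well-formed but semantically wild, can be sketched in miniature. The templates and word lists below are invented for illustration and are not RACTER’s own:

```python
import random

random.seed(0)

# Word files by part of speech (invented, not RACTER's vocabulary).
lexicon = {
    "NOUN": ["electron", "lettuce", "dream", "policeman"],
    "ADJ": ["furious", "luminous", "brief"],
    "VERB": ["whispers to", "devours", "contemplates"],
}

# Grammatical frames: each slot names the word class it accepts.
templates = [
    "The ADJ NOUN VERB the NOUN.",
    "The NOUN VERB the ADJ NOUN.",
]

def generate():
    """Pick a frame and fill each part-of-speech slot with a random
    word of that class -- 'syntax directives' in miniature."""
    words = []
    for token in random.choice(templates).split():
        bare = token.rstrip(".")
        if bare in lexicon:
            words.append(random.choice(lexicon[bare]) + token[len(bare):])
        else:
            words.append(token)
    return " ".join(words)

for _ in range(3):
    print(generate())
```

Because the grammar is enforced while the word choices are random, every sentence parses as English even when, as Chamberlain puts it, the “thinking” is crazy.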
49. For the history and assessment of various attempts at translation using computers, see Y. Wilks, “Machine Translation,” in Shapiro, ed., Encyclopedia of Artificial Intelligence, vol. 1, pp. 564-571.
1. Douglas R. Hofstadter, Metamagical Themas: Questing for the Essence of Mind and Pattern, p. 128.
2. David Waltz, “The Prospects for Building Truly Intelligent Machines,” Daedalus, Winter 1988, p. 204.
3. The preface in Tom Forester’s Information Technology Revolution examines similar issues.
4. In the fall of 1987 an entire issue of Scientific American was devoted to this topic. In particular, see Abraham Peled, “The Next Computer Revolution,” Scientific American, October 1987, pp. 56-64.
5. See James D. Meindl, “Chips for Advanced Computing,” Scientific American, October 1987, pp. 79-81 and 86-88.
6. See Mark H. Kryder, “Data-Storage Technologies for Advanced Computing,” Scientific American, October 1987, pp. 117-125.
7. Such as massively parallel processors, possibly based on superconductors. See Peter J. Denning, “Massive Parallelism in the Future of Science,” American Scientist, Jan.-Feb. 1989, p. 16.
8. Marvin Minsky discusses this problem in “Easy Things Are Hard,” Society of Mind, p. 29.
9. Koji Kobayashi, Computers and Communication: A Vision of C & C, pp. 165-166.
10. See Marshall McLuhan, Understanding Media.
11. For a vision of an office system interfacing with a public communication network, see Koji Kobayashi, Computers and Communication, chapter 10. See also Roger Schank and Peter G. Childers, “The World of the Future,” in The Cognitive Computer, pp. 227-230.
12. David N. L. Levy, All about Chess and Computers. See also M. M. Botvinnik, Computers in Chess.
13. An intriguing study of the relevance of comments made by master chess players during play can be found in Jacques Pitrat, “Evaluating Moves Rather than Positions,” in Barbara Pernici and Marco Somalvico, eds., III Convegno Internazionale L’Intelligenza Artificiale ed il Gioco Degli Scacchi (Federazione Scacchistica Italiana, Regione Lombardia, Politecnico di Milano, 1981).
14. Early board-game programs modeled on master players’ strategies are evidence of this. The 1959 checkers program of Arthur Samuel, for example, had 53,000 board positions in memory. See Peter W. Frey, “Algorithmic Strategies for Improving the Performance of Game-Playing Programs,” in Evolution, Games, and Learning: Models for Adaptation in Machines and Nature, Proceedings of the Fifth Annual International Conference of the Center for Nonlinear Studies at Los Alamos, N.M., May 20-24, 1985, p. 355.
15. Recursiveness and massive computational power allow for subtle (and hence enormously varied) solutions to algorithmic problems. See, for example, Gary Josin, “Neural Net Heuristics,” BYTE, October 1987, pp. 183-192; and Douglas Lenat, “The Role of Heuristics in Learning by Discovery,” in R. S. Michalski, J. G. Carbonell, and T. M. Mitchell, eds., Machine Learning: An Artificial Intelligence Approach. Also see Monroe Newborn, Computer Chess, pp. 8-15.
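The recursive game-tree evaluation these notes allude to can be sketched with a bare-bones minimax over an invented toy tree; this is an illustration of the general technique, not Samuel’s or any cited program:

```python
# Minimax: the value of a position is the best of its children's
# values for whichever player is to move, computed recursively.

def minimax(node, maximizing=True):
    """Return the value of `node`: a number is a scored leaf
    position; a list holds the child positions to recurse into."""
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Depth-2 tree: the first player picks a branch; the opponent then
# picks the leaf worst for the first player within that branch.
tree = [[3, 12], [2, 8], [14, 1]]
print(minimax(tree))  # 3
```

The first branch wins even though the third contains the largest leaf (14): the opponent would never allow it, which is exactly the subtlety recursive lookahead captures.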
16. See John Holusha, “Smart Roads Tested to Avoid Traffic Jams,” New York Times, October 18, 1988.
17. Plans are already in place for the development and use of flying vehicles. See, for instance, “Simulation of an Air Cushion Vehicle Microform,” final report for period January 1975-December 1976, Charles Stark Draper Laboratory, Cambridge, 1977.
18. For recent advances in computer and chip design, see James D. Meindl, “Chips for Advanced Computing,” Scientific American, October 1987, pp. 78-88. An extensive but less current review of the technology may be found in Alan Burns, The Microchip.
19. See Stewart Brand, The Media Lab: Inventing the Future at MIT, pp. 83-91.
20. Although the Turing test has been discussed at length in chapters 2 and 3, the general reader may further appreciate a straightforward presentation of this famous test in Isaac Malitz, “The Turing Machine,” BYTE, November 1987, pp. 348-358.
21. Research at the University of Illinois is a case in point. The Center for Supercomputing Research and Development was established there in 1984 for the purpose of demonstrating that high-speed parallel processing is practical for a wide range of applications.
22. Considerable mention is given to this technology in B. Deaver and John Ruvalds, eds., Advances in Superconductivity. See especially A. Barone and G. Paternò, “Josephson Effects: Basic Concepts.”
23. See M. A. Lusk, J. A. Lund, A. C. D. Chaklader, M. Burbank, A. A. Fife, S. Lee, B. Taylor, and J. Vrba, “The Fabrication of a Ceramic Superconducting Wire,” Superconductor Science and Technology 1 (1988): 137-140.
24. For a brief but informative article on the subject, see Robert Pool, “New Superconductors Answer Some Questions,” Science 240 (April 8, 1988): 146-147.
25. See David Chaffee, The Rewiring of America: The Fiber Optics Revolution. Also, a reliable, technically informative account may be found in Robert G. Seippel, Fiber Optics.
26. A brief account of the new technology that serves as the basis for these advances may be found in Tom Forester, The Materials Revolution, pp. 362-364.
27. It is interesting to compare the mentions made of molecular computing in Tom Forester’s High Tech Society, p. 39, with those in his Materials Revolution, pp. 362-364.
The Impact On . . .
1. Seymour Papert makes a convincing argument for extensive use of computers in the classroom in “Computers and Computer Cultures,” Mindstorms, pp. 19-37.
2. See notes 5 and 14 to the prolog of this book for sources for these statistics.
3. Wassily Leontief and Faye Duchin, eds., The Future Impact of Automation on Workers, p. 18. See also note 13 to the prolog of this volume.
4. Wassily Leontief and Faye Duchin, eds., The Future Impact of Automation on Workers, pp. 20-21.
5. Wassily Leontief and Faye Duchin, eds., The Future Impact of Automation on Workers, pp. 12-19. See also James Jacobs, “Training Needs of Small and Medium Size Firms in Advanced Manufacturing Technologies,” in 1987 IEEE Conference on Management and Technology, Atlanta, Georgia, October 27-30, 1987, pp. 117-123.
6. Wassily Leontief and Faye Duchin, eds., The Future Impact of Automation on Workers, pp. 25-26, 52.
7. Wassily Leontief, “The World Economy to the Year 2000,” Scientific American, Sept. 1980, pp. 206-231.
8. Wassily Leontief and Faye Duchin, eds., The Future Impact of Automation on Workers, pp. 25-26, 92.
9. Tom Forester, High Tech Society, p. 181.
10. See “The Electronic Office,” in Tom Forester, High Tech Society, pp. 195-217.
11. Indeed, floppy disks and CDs have popularly been called a “new papyrus.”
12. See Ted H. Nelson, “Getting It out of Our System,” in G. Schechler, ed., Information Retrieval, pp. 191-210. For a recent account that reveals changes and advances, see Mark Bernstein, “The Bookmark and the Compass: Orientation Tools for Hypertext Users,” SIGOIS Bulletin 9 (1988): 34-45.
13. Note, for example, the considerable professional reshuffling in the labor force to accommodate burgeoning technological advances in the workplace. See S. Norman Feingold and Norma Reno Miller, Emerging Careers: New Occupations for the Year 2000 and Beyond, vol. 1.
14. For a discussion of children’s psychological responses to computers, see “The Question of ‘Really Alive,’” in Sherry Turkle, The Second Self, pp. 324-332.
15. For the sake of brevity I shall use the term “education” here in the conventional sense of education during the school years, but the ideas expressed in this section pertain to education in general. See, for example, Elizabeth Gerver, “Computers and Informal Learning,” in Humanizing Technology: Computers in Community Use and Adult Education; and Jean-Dominique Warnier, “The Teaching of Computing,” in Computers and Human Intelligence, p. 113 ff.
16. There is a growing volume of literature on the subject of children and computers. See, for instance, Sherry Turkle, “Child Programmers,” in The Second Self, pp. 93-136; Robert Yin and J. L. White, “Microcomputer Implementation in Schools,” in Milton Chen and William Paisley, eds., Children and Microcomputers, pp. 109-128; Seymour Papert, Daniel Watt, Andrea diSessa, and Sylvia Weir, “Final Report of the Brookline LOGO Project,” MIT AI memo no. 545, Sept. 1979; and R. D. Pea and D. M. Kurland, “On the Cognitive and Educational Benefits of Teaching Children Programming: A Critical Look,” in New Ideas in Psychology, vol. 1.
17. Debra Liberman, “Research and Microcomputers,” in Milton Chen and William Paisley, eds., Children and Microcomputers, pp. 60-61.
18. See John Seely Brown, “Process versus Product,” in Chen and Paisley, eds., Children and Microcomputers, pp. 248-266.
19. Functioning well right now are networked computerized card catalogs linking various university libraries. For a skeptical view of networking, see Theodore Roszak, “On-Line Communities: The Promise of Networking,” in The Cult of Information: The Folklore of Computers and the True Art of Thinking.
20. Tom Forester swiftly chronicles the obstacles that have contributed to this situation in High Tech Society, pp. 165-169. These obstacles notwithstanding, from the fall of 1980 to the spring of 1982 the number of computers more than tripled in American schools. See also Jack Rochester and John Gantz, The Naked Computer, p. 104.
21. Edward Tenner, Harvard Magazine 90 (1988): 23-29.
22. Tenner, Harvard Magazine 90 (1988): 23-29.
23. For a dispassionate and informative presentation of the Strategic Defense Initiative in which a “flexible nuclear response” is examined, see Stephen J. Cimbala, “The Strategic Defense Initiative,” in Stephen J. Andriole and Gerald W. Hopple, eds., Defense Applications of Artificial Intelligence, pp. 263-291.
24. See Randolf Nikitta, “Artificial Intelligence and the Automated Tactical Battlefield,” in Allan M. Din, ed., Arms and Artificial Intelligence: Weapons and Arms Control Applications of Advanced Computing, pp. 100-134.
25. McGeorge Bundy, George F. Kennan, Robert S. McNamara, and Gerard Smith, “Nuclear Weapons and the Atlantic Alliance,” Foreign Affairs, Spring 1982, pp. 753-768. Another thoughtful and succinct appraisal of the subject of nuclear deterrence is Leon Wieseltier’s Nuclear War, Nuclear Peace.
26. See Edward C. Taylor, “Artificial Intelligence and Command and Control-What and When?” in Andriole and Hopple, eds., Defense Applications of Artificial Intelligence, pp. 139-149. See also Alan Borning, “Computer System Reliability and Nuclear War,” Communications of the ACM 30 (1987): 124.
27. See Alan Borning, “Computer System Reliability and Nuclear War,” Communications of the ACM 30 (1987): 112-131.
28. See P. R. Cohen, D. Day, J. DeLisio, M. Greenberg, R. Kjeldsen, D. Suthers, and P. Berman, “Management of Uncertainty in Medicine,” International Journal of Approximate Reasoning, Sept. 1987, pp. 103-116.
29. See Tom Forester’s discussion of recent innovations in biotechnology in his Materials Revolution, pp. 362-364.
30. For the well initiated, the excellent but highly technical journal Biomaterials, Artificial Cells, and Artificial Organs: An International Journal frequently publishes articles relevant to the subject.
31. See Glenn D. Rennels and Edward H. Shortliffe, “Advanced Computing for Medicine,” Scientific American, October 1987, pp. 154-161. See also James S. Bennett, “ROGET: A Knowledge-Based System for Acquiring the Conceptual Structure of a Diagnostic Expert System,” Journal of Automated Reasoning 1 (1985): 41-50.
32. For a clear overview of the vision challenge, see Michael Brady, “Intelligent Vision,” in Grimson and Patil, eds., AI in the 1980s and Beyond, pp. 201-243.
33. While we are still far from this achievement, advances are being made in the field of music notation. See, for instance, John S. Gourlay, “A Language for Music Printing,” Communications of the ACM 29 (1986): 388-401.
34. Of interest to the reader may be the anonymous publication “Vladimir Ussachevsky: In Celebration of His Seventy-Fifth Birthday,” University of Utah, 1987, pp. 8-9.
35. See David Dickson, “Soviet Computer Lag,” Science, August 1988, p. 1033. Some other interesting facts about Soviet restrictions are to be found in Rochester and Gantz, “How’s Your CPU, Boris?” The Naked Computer.
1. See pp. 677-678 of Douglas Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid (New York: Basic Books, 1979) for a fuller account of his concept of potential computer weaknesses.
2. Marvin Minsky, Society of Mind, pp. 186, 288.
3. The general reader will find pertinent to the topic Paul M. Churchland’s philosophical and scientific examination throughout Matter and Consciousness.
4. See Einstein’s letters of August 9, 1939, and December 22, 1950, to E. Schrödinger, in K. Przibram, ed., Letters on Wave Mechanics, pp. 35-36 and 39-40.
5. Admittedly, some disavow the applicability of subatomic metaphors to any other aspect of life. See Paul G. Hewitt, Conceptual Physics, 2nd ed., pp. 486-487.