Can Computers Decide?

July 6, 2001 by Roger Schank

A look at how computers make decisions. By saying computers can’t truly reason, are we being “fleshists?”

Originally published March 22, 2001 at iMP Magazine. Published on KurzweilAI.net July 6, 2001.


The real question is whether we want them to decide. My career has encompassed many different areas of software design and deployment, but always this question persists: will people let computers decide? When I was working in artificial intelligence (AI), we began to realize that we could create machines that not only could decide, but would likely be able to decide better than humans can. Now that I work in the area of software for education, we realize we can create courses that will function quite nicely without any human instructor involved, yet we are always confronted with questions like “What about the human element?” and “Will the student be evaluated by a computer?” These are related issues. Let me explain why. I will start with AI.

AI came of age with the advent of expert systems. These systems were rule-based and attempted to capture the rules that a decision maker uses when making a knowledge-based decision like where to drill for oil or how to diagnose a disease. These systems worked passably well, but they never got smart as a result of their experiences and thus never really rivaled a human decision maker in a complex domain.
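The rule-based approach described above can be sketched in a few lines: known facts go in, and if-then rules fire repeatedly until nothing new can be derived (forward chaining). This is a hypothetical toy for illustration, not any actual expert system; the rule and symptom names are invented.

```python
# A minimal forward-chaining rule engine: a sketch of how a rule-based
# expert system derives conclusions. Rules and facts are hypothetical.

def forward_chain(facts, rules):
    """Fire every rule whose conditions hold, repeating until no new fact is added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Toy diagnostic rules, in the spirit of the disease-diagnosis example
rules = [
    ({"fever", "cough"}, "suspect-flu"),
    ({"suspect-flu", "muscle-aches"}, "diagnose-flu"),
]

derived = forward_chain({"fever", "cough", "muscle-aches"}, rules)
print("diagnose-flu" in derived)  # the two rules chain together to reach a diagnosis
```

As the article notes, a system like this never gets smarter: the rule set is fixed by its builders, which is exactly the limitation the next generation addressed.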

The next generation of AI decision making systems were case-based reasoning (CBR) systems that employed cases instead of rules. They reasoned by analogy to prior experience and could integrate new experiences into their database, thus getting smarter with each new attempt at reasoning. A good CBR system depends upon having thousands of cases indexed in complex ways so that just the right experience can “come to mind” at the right time. Thus, a good CBR system could really “know” more and retrieve what it knows better than a human could.
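The retrieve-reuse-retain cycle of CBR can likewise be sketched minimally. Everything here is a hypothetical illustration: similarity is reduced to simple feature overlap, the adaptation step is omitted, and the cases are toy stand-ins for the thousands of richly indexed cases a real CBR system would hold.

```python
# A hedged sketch of case-based reasoning: retrieve the most similar
# prior case, reuse its solution, then retain the new experience so the
# system "gets smarter." All cases and feature names are invented.

def similarity(a, b):
    """Crude similarity: the number of indexed features two cases share."""
    return len(a & b)

def solve(case_base, problem_features):
    """Retrieve the best-matching prior case, reuse its solution, retain the new case."""
    best = max(case_base, key=lambda c: similarity(c["features"], problem_features))
    solution = best["solution"]  # reuse (a real system would adapt it first)
    case_base.append({"features": problem_features, "solution": solution})  # retain
    return solution

case_base = [
    {"features": {"ship-seized", "crew-held"}, "solution": "negotiate"},
    {"features": {"border-incursion"}, "solution": "show-of-force"},
]

print(solve(case_base, {"ship-seized", "crew-held", "near-hostile-waters"}))
# retrieves the seized-ship precedent and reuses its solution
```

Because each solved problem is appended to the case base, the system's later retrievals draw on a growing store of experience, which is the sense in which a CBR system improves with use.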

CBR systems could get good enough that you might want a computer decision maker to decide U.S. foreign policy. What a terrible idea, say the scoffers. But real decision makers reason from cases too, and sometimes they make profound case-based errors (my favorite being the use of the Pueblo incident as a precedent to help President Ford decide what to do in the Mayaguez incident, a match that only made sense because both involved Spanish-named ships). A computer would presumably be more consistent, less liable to emotion and stress, more mindful of historical precedent, more generally knowledgeable of prior cases, and much less likely to have had a bad night’s sleep or a fight with its wife.

We found, however, in the process of building CBR systems, that people were afraid of them. No one, it seemed, wanted computers to take over, or to make hard decisions without a human “in the loop” watching to see that a good decision was made. The problem with this is that a computer can have a much larger case base than any one human can, so large that it would be hard to imagine a human knowing whether the computer had made a good decision or not. Humans can’t really know all that much. They only have the particular experiences they have had, and those can differ by quite a bit from the experience of the next guy. A computer has the benefit of all of the experiences that any set of humans is likely to tell to the computer.

Now let me turn to the field in which I currently work. For some years now I have been building educational software, not the drill-and-kill kind, but the kind that allows for a learn-by-doing experience in a complex simulated environment. I do not believe that education is well served by lectures, by computer versions of lecture courses, or by the general approach to learning that says that swallowing whole heaps of information, and showing how much you have swallowed on a multiple-choice test, means anything at all. We learn from experience, interrupted by good just-in-time teaching (storytelling, really). Well-designed software can create just the kind of experiential learning (think of flight simulators) that is so hard to find in a book or in a classroom.

We have been building this software for some time, and now, after having started to work with Columbia University to build high-quality university-level courses, we are offering those courses to high schools around the country. The idea is that in a country where the football coach is the most likely candidate to teach physics, and where a psychology course is ne’er to be found in any but the fanciest of high schools, it might make sense to start migrating high-quality learn-by-doing online courses from a top university into the high schools. This might make sense, and to some high schools it does, but most cry foul. You mean there won’t be a teacher present? How will the student learn? We can’t not have a teacher. What if we put a teacher in the classroom where the students are online? Well, the whole idea of online courses is that they can be taken any time, anywhere, and one would assume that the last place to take them would be at an assigned time in a classroom. Further, who are they going to put in there? We are offering C++ and JAVA courses precisely because high schools don’t have instructors who can teach these courses. Does this deter them from putting a teacher in there? No, of course not. They just try to put in a teacher who “isn’t any good anyhow” and who “normally teaches something else,” just so there will be a human present.

For years I have started my classes by asking students if they thought computers could reason, and I was always told that they couldn’t. I would then claim that I was actually a computer and ask if they believed I could reason. They said I could but, of course, didn’t believe that I was a computer. I accused them of being “fleshists,” that is, of being prejudiced against reasoning devices that are not made of human flesh. The sad story is that we all somehow seem to believe that humans are better at reasoning than a computer could ever be, which is clearly false, or better at teaching than a computer could ever be, which is also clearly false. Of course, there are some pretty exceptional humans, but there are also some pretty bad ones. I say we give the computers the chance to show their stuff.

The computer-based online world that we are about to enter can be very powerful. We can be pretty sure that politicians, teachers, and other people with vested interests will do very little to help this world come about, because they are afraid of being replaced. They are not going to be replaced by superior beings, of course; that is the stuff of science fiction. They could, on the other hand, be replaced in tasks that are simply too much for any one human to bear. Making complex decisions based on tremendous amounts of data would seem to be one of these. Another is bringing all the world’s knowledge to bear to help out a student in just the right way at just the right time. Well-intentioned teachers may simply not know enough, or be unable to spend the individual one-on-one time that a student might need. We have progressed beyond the time when the teacher was the most knowledgeable person in a community or school. We have also progressed beyond the time when decision makers can rely on their own experience or the experience of their assistants to deal with a complex environment. We must stop being fleshists and encourage both the building of these kinds of programs and their deployment.


http://www.cisp.org/imp/march_2001/03_01schank.htm