Technology Fear Factor
July 21, 2002 by Daintry Duffy, Sari Kalin
Three futurists — George Gilder, Ray Kurzweil, and Jaron Lanier — agree that emerging, potentially dangerous technologies will require smarter defenses: diversity of standards, decentralized systems, a transparent society, better communication between factions, and mutually beneficial collaboration among business leaders.
Originally published in Darwin Magazine May 2002. Published on KurzweilAI.Net July 21, 2002.
A year ago, if you’d asked someone to name the five most dangerous devices created by human hands, a box cutter probably wouldn’t have made the list. Times change, and so do our fears. Like any manual device, a technology can be used for good or for evil.
"Technology has always been a double-edged sword," says Ray Kurzweil, artificial intelligence researcher and inventor. "[The harnessing of] fire was a great breakthrough but certainly could be used for destructive purposes. The wheel gave us greater mobility but also allowed armies to form."
George Gilder, chairman of Great Barrington, Mass.-based Gilder Publishing, is also a senior fellow at the Discovery Institute, where he directs its program on high tech and public policy.
Most Dangerous Technology: Remember the old joke: What’s the most dangerous part of the car? The nut behind the wheel. That’s pretty much Gilder’s take. "The root cause of most of the technological dangers is the individual human being who does not believe in the Golden Rule — who believes in a zero-sum society and imagines that his own success can only come at the expense of others," he says.
Ray Kurzweil, artificial intelligence researcher, founder of Kurzweil Technologies in Wellesley, Mass., and inventor of, among other things, music synthesizers and a reading machine for the blind.
Most Dangerous Technology: Kurzweil won’t label any technology as inherently dangerous: He thinks that any technology can be used for either good or evil. Yet he feels that biotechnology and nanotechnology are exponentially more powerful than any technology of the past. To him, nanotechnology — the creation of tiny, intelligent self-replicating devices — is the more powerful of the two. "Ultimately, information technology will create entities that have human levels of intelligence and beyond," Kurzweil says. "Combine those with biotechnology and nanotechnology, and it’s not hard to create very dangerous scenarios."
Jaron Lanier, computer scientist, artist and virtual reality pioneer (he coined the term virtual reality); lead scientist for the National Tele-Immersion Initiative, which studies applications for the next-generation Internet.
Most Dangerous Technology: Lanier cites 20th-century one-way mass media, such as radio and TV, because they can incite mass violence. A close second, he says, is the integration of IT and biotechnology. Although it can help us discover miracle drugs, it could also help terrorists build super-dangerous weapons. "That leads to terrifying scenarios that have been explored in science fiction — like the drug released from a basement that only wipes out people of one race," he says.
What Are the Dangerous Technologies?
Darwinmag: So what’s the best defense against bad people using technology for evil purposes?
Ray Kurzweil: Few would deny that biotechnology and nanotechnology are potentially dangerous. But that doesn’t mean we should relinquish or constrain the use of these technologies. Bill Joy [in his Wired article "Why the Future Doesn't Need Us"] was saying, Let’s keep the beneficial technologies and get rid of the dangerous ones. But the same biotechnology that’s going to overcome cancer and disease could also allow a terrorist to create a bioengineered pathogen.
George Gilder, senior fellow at the Discovery Institute: Because evil is inexorable, it is critical that the positive, creative and anti-zero-sum forces command the leading-edge technologies. There should not be some constraint that restricts the positive and generous forces in the universe — chiefly the capitalist forces, in my point of view — from keeping the lead in these technologies. These biotech terrors are far more likely to be unleashed in a world where the positive powers have relinquished biotech research and development. A society that’s full of vibrant, productive and creative biotech companies is going to be far more alert to potential threats.
Kurzweil: I would agree. If you relinquish broad areas of technology, you’re only going to drive them underground. Only the less responsible practitioners — that is, the terrorists — will have expertise in it. If you make knowledge of these technologies widespread, then you’re going to have a lot of people thinking about how to safeguard against them. When incidents occur, you’ll have a very broad and responsive community that’ll have the tools and the knowledge to deal with them.
Let’s take one test case, the software virus, a new form of human-made, self-replicating pathogen that replicates within a computer network. When these first emerged, they were primitive, and observers said that as the viruses get more sophisticated, they’re going to destroy computer networks. But the defensive technologies actually developed more rapidly than the offensive ones because the knowledge of how to create viruses and information technology in general was very widespread. Today we have more sophisticated viruses that we attempt to keep to a nuisance level. Yes, they cause billions of dollars of damage. But the damage is less than 1 percent of the benefit that we get from computer networks.
Jaron Lanier, virtual reality pioneer: That’s a very positive model. But I do want to point out that in nature, an ecosystem’s defense against self-replicating dangerous entities is biodiversity. In our information systems, we simply have not been able to achieve that sense of protective diversity. We have this absurd monoculture, particularly on the desktop, that is practically an invitation to viruses. We really need to discover some new policy or technique to re-create this diversity.
Kurzweil: The reason we lack diversity is not because of some conspiracy. It’s because standards become, in a sense, self-replicating. Everybody gets Microsoft software because everyone else has it. If I want to send a document to someone, I hope they can read a Word document. If I send WordStar, they’re not going to be able to read it. It becomes self-perpetuating, and I’m not sure how you would create that diversity when there’s this tendency to centralize toward these standards.
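The self-perpetuating pull toward a single standard that Kurzweil describes is a classic network-effect feedback loop, and it can be sketched in a few lines of code. The model below is purely illustrative (it is not from the interview, and the function names and the feedback exponent are assumptions): each round, new adopters choose a document format with probability proportional to a superlinear function of its current share, so even a 51/49 split collapses toward monopoly.

```python
# Illustrative toy model of standards lock-in via network effects.
# Assumption: attractiveness grows superlinearly with current share
# (feedback exponent > 1), modeling "everybody gets it because
# everyone else has it."

def next_share(share: float, feedback: float = 2.0) -> float:
    """Expected share of the leading standard after one adoption round.

    With feedback > 1, a standard's pull on new adopters grows faster
    than its share, so small leads compound.
    """
    a = share ** feedback
    b = (1.0 - share) ** feedback
    return a / (a + b)

def run(initial: float, rounds: int) -> float:
    """Iterate the adoption dynamic from an initial market share."""
    share = initial
    for _ in range(rounds):
        share = next_share(share)
    return share

if __name__ == "__main__":
    # A bare 51% lead becomes near-total dominance within 20 rounds.
    print(f"After 20 rounds, a 51% leader holds {run(0.51, 20):.1%}")
```

A perfect 50/50 split is an unstable fixed point of this dynamic: any perturbation, however small, gets amplified round after round, which is one way to see why Lanier's protective diversity is so hard to sustain without some countervailing mechanism.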
Lanier: It’s a difficult problem, but it’s one that we need to have some fresh ideas about. It’s not as if biodiversity doesn’t have its costs in nature. It’s expensive to diversify, but it’s worth the price. I don’t propose a top-down enforcement of diversity — that would be a disaster. One thing that might really make a difference is advances in software technology. I’ve been researching operating systems whose fundamental binding principle is pattern recognition rather than perfect matching so that software modules can fit together more flexibly. I don’t know if such a scheme will ever work, but if it did, you could start to have a little bit less tyranny of standards and a little bit more integration of diverse components. That might be one technical solution.
Kurzweil: There are other technological solutions. If we move toward decentralization, we create a much safer society. Sept. 11 involved centralized technologies: cities, buildings and airplanes. The Internet is very robust; no one has ever taken it down, even for a minute. Communication over the Internet is much safer than communication in a congregated area like a city or a building. Energy resources such as nuclear power plants and liquefied natural gas tanks are inherently very dangerous. There are new decentralized energy technologies, like microscopic fuel cells, that are inherently much safer than those other energy technologies, which are subject to disruption.
Privacy? What Privacy?
Darwinmag: What are some strategies for getting past the danger?
Lanier: The only way for us to humanely survive dangerous technologies is to foster a society that’s increasingly transparent and, ultimately, radically transparent. Having everyone’s eyes looking at everyone else and not worrying so much about privacy is really the only way to have enough eyes looking around to make sure that nothing too dangerous happens.
There are three centers of gravity on the privacy issue — those who wish to see law enforcement have a privileged position in being able to view what others are doing, those who want to have encryption for the masses to disempower law enforcement because they are concerned about abuses, and those who trust nobody and want all eyes applied in all directions. That last category is where I find myself.
Currently, we’re in a moment where we trust our law enforcement far more than we do rogue members of our population, but that’s not necessarily going to be the case in 20 years. Rather than setting up a system where we create a center of power either for the individual or for law enforcement, what we really should seek is universal view-ability of everyone by everyone. That’s not a goal we can achieve immediately because it offends the principles we’re currently used to. But it is one we could approach gradually.
Gilder: Interestingly, in early society, everything was transparent. Everybody knew what everybody else was doing, and when they didn’t, they got very nervous and incinerated a witch. Modern society has vastly expanded the domains of privacy, a movement further enhanced through the spread of unbreakable schemes of encryption. Privacy is becoming more and more pervasive, in league with technologies that can be applied to destructive purposes.
Kurzweil: That’s true. Technologically, trends are moving away from transparency and toward unbreakable encryption. Encryption is a lot easier to build than decryption. Even if the general public allows an encryption trapdoor, people who really want to keep their communications private, like terrorists, will have access to unbreakable codes.
Lanier: The hope, though, of the transparent society is that if there is a group of people up to no good, or even a solitary person, they’ll make a mistake at some point. A person can’t live their entire life universally encrypted. There has to be some crack. And if enough people are watching at once, you’ll catch them.
Darwinmag: Let’s take this discussion in a slightly different direction. What can we do to make sure that technologists of the future — people who will grow evermore powerful — don’t lose their connection to humanity?
Lanier: Technical education has been more and more influenced by the business world in the last 10 years. Yet it’s possible to gain a technical education and know nothing at all about people — to have virtually no exposure to the humanities or the arts and to have poor communication skills. We’ve learned to teach engineering as a purely functional activity without any connection to humanity and without heart. That’s something that should be taught. Just look at the Sept. 11 terrorists — I was struck that some of them received a technical education in Germany.
Kurzweil: The terrorists had an education in a certain set of religious values from which they were able to come up with their destructive ideas.
Gilder: Their technical education wasn’t the problem.
Lanier: Right. The primary problem with the terrorists is their religious framework. But an educational environment should provide, if not a challenge, an area in which these humanistic issues are aired. Right now you can get a master’s in computer science or electrical engineering, and nobody really knows who you are outside of the classroom. That sense of anonymity in education is unhealthy.
Kurzweil: We have to assume, though, that there will be people, movements, and ideas that are of a zero-sum nature, that will be destructive to advance their own goals. We’re not going to get rid of that. The pervasiveness of al-Qaida has been shocking to people. But even if we deal with that organization, the opportunity and desire to use the powers of technology for destructive ends is going to remain. Society needs to protect itself.
Lanier: Certainly. Yet we all agree that humans are the problem, not so much technology. So we have to take responsibility for the fact that as technologists, we are highly influential in the education system that is, in turn, one of the more influential institutions in shaping young people.
Gilder: Well, the Unabomber went to Harvard. He went through the general education curriculum with me.
Lanier: Yeah, but that was in the ’60s, right? [laughter]
What Does the Future Hold?
Darwinmag: Are there some technologies that can make the world less dangerous?
Kurzweil: There are ways to use technology to create a much safer society. Virtual communication is one. It’s getting more and more realistic. The three of us are now engaged in an auditory virtual reality — that’s what the telephone has been for a century. Jaron is a pioneer in full-immersion visual virtual reality, and there are various ways that can be achieved over the Net. That will be ubiquitous by the end of this decade. We’ll ultimately have the other senses as well. We’ll be able to meet with each other in these virtual environments, and there are advantages to that. Physical violence against another person is much more difficult in a virtual environment.
Gilder: So, face-to-face negotiations with Osama bin Laden could be conducted. But to what end?
Lanier: Well, I might sound a little soppy or idealistic here, but I really do believe that when people are given the chance to contact one another, to really communicate, you can elicit a little drop of empathy even in somebody who’s been raised to hate you. We’ve seen societies do just that. For instance, Germany has been able to reconcile with its past remarkably well. So I don’t want to assume that there’s a group of people who are simply beyond hope. My view of communications technology is that with it, there is hope of reaching even people in al-Qaida. That’s my own idealism. Blame it on my California residency or something.
Darwinmag: So, what do you think the future holds?
Lanier: I was at the World Economic Forum in New York [at the beginning of February 2002], and while I was there I spent time with the summit of religious leaders and talked to various people from different religions. They were all doing their best to get along, but they did this by not really listening to one another.
Then I spent some time with the protesters outside. They didn’t even know what they were against, and they certainly didn’t know what they were for. There was just a fuzzy nihilism that was very disappointing. I also spent time with the world leaders in various meetings, and they were all doing their best but, once again, were not really on track.
However, when I spent time with businessmen, things were remarkable. There was one particular meeting of Saudi, Jordanian, and Israeli businessmen where they all really listened to each other, they were all action oriented, they were warm, they were authentic, and it was really the most hopeful meeting by far.
I don’t want to pretend that capitalism doesn’t have its flaws. But right now, capitalist enterprise is the most effective technique that we have for bridging gaps in culture, background, and experience. Business leaders have a tremendous opportunity to help build a world of increasing connectedness and trust. It’s urgent for businesses not just to make investments in troubled parts of the world but really to become involved in them and to think strategically in them. I don’t think we have any choice but to pursue that.
Gilder: You’re right on that point. Businessmen can create circles of cooperation because they know they’re dependent on the successes of others. The Golden Rule is inherent in capitalism. Your success is dependent upon the enrichment of your customers and even your competitors and your suppliers, and you have to cooperate with all of them. Business offers constant education in altruism.
A further point about the dynamic of business and its interplay with technology today is that both are increasingly distributed. Power is increasingly dispersed. This dispersion of power ultimately resolves itself in the unleashing of the creativity of every individual. As each individual optimizes his own creativity, he understands that he is dependent on the creativity of others, and that understanding leads to increasing cooperation.
Kurzweil: You know, the 1991 coup against Gorbachev was broken not by Yeltsin bravely standing on a tank but by the decentralized electronic communications industry — the fax machines and early forms of e-mail. Everybody kept in touch with one another and knew what was going on. Decentralized technology is very democratizing.
I’m very optimistic about the future, and I’m hopeful that we can make progress in the 21st century without the type of pain and distress that we experienced in the 20th century.
We need to think about how we can build our technologically rich society in a way that avoids potential dangers, and we need to examine the destructive nature of the human being as a scientific and technological problem that we want to actually address. One thing we’ve shown is that if we put our minds to a problem and think about its dimensions, we’re very often successful at coming up with a solution.
Copyright © 2002 Darwin Magazine.