The Matrix loses its way: Reflections on The Matrix and The Matrix Reloaded

May 19, 2003 by Ray Kurzweil

(credit: Warner Bros. Pictures)

You’re going to love Matrix Reloaded — that is, if you’re a fan of endless Kung Fu fights, repetitive chase scenes, a meandering and poorly paced plot, and sophomoric philosophical musings. For much of its 2 hours and 18 minutes, I felt like I was stuck looking over the shoulder of a ten-year-old playing a video game.

It’s too bad, because the original Matrix was a breakout film, introducing audiences to a new approach to movie making, while reflecting in an elegant way on pivotal ideas about the future.

Although I disagree with its essentially Luddite stance, it raised compelling issues that have drawn intense reactions, including thousands of articles and at least a half dozen books.

Is Matrix-style VR feasible?

There is a lot more to say about the original Matrix than this derivative and overwrought sequel, so let me start with that. The Matrix introduced its vast audience to the idea of full-immersion virtual reality, to what Morpheus (Laurence Fishburne) describes as a “neural interactive simulation” that is indistinguishable from real reality.

I have been asked many times whether virtual reality with this level of realism will be feasible and when.

As I described in my chapter “The Human Machine Merger: Are We Heading for The Matrix?” in the book Taking the Red Pill [1], virtual reality will become a profoundly transforming technology by 2030. By then, nanobots (robots the size of human blood cells or smaller, built with key features at the multi-nanometer scale; a nanometer is a billionth of a meter) will provide fully immersive, totally convincing virtual reality in the following way.

The nanobots take up positions in close physical proximity to every interneuronal connection coming from all of our senses (e.g., eyes, ears, skin). We already have the technology for electronic devices to communicate with neurons in both directions, in a way that requires no direct physical contact with the neurons.

For example, scientists at the Max Planck Institute have developed “neuron transistors” that can detect the firing of a nearby neuron, or alternatively, can cause a nearby neuron to fire, or suppress it from firing. This amounts to two-way communication between neurons and the electronic-based neuron transistors. The Institute scientists demonstrated their invention by controlling the movement of a living leech from their computer.

Nanobot-based virtual reality is not yet feasible in size and cost, but we have made a good start in understanding the encoding of sensory signals. For example, Lloyd Watts and his colleagues have developed a detailed model of the sensory coding and transformations that take place in the auditory processing regions of the human brain. We are at an even earlier stage in understanding the complex feedback loops and neural pathways in the visual system.

When we want to experience real reality, the nanobots just stay in position (in the capillaries) and do nothing. If we want to enter virtual reality, they suppress all of the inputs coming from the real senses, and replace them with the signals that would be appropriate for the virtual environment. You (i.e., your brain) could decide to cause your muscles and limbs to move as you normally would, but the nanobots again intercept these interneuronal signals, suppress your real limbs from moving, and instead cause your virtual limbs to move and provide the appropriate movement and reorientation in the virtual environment.
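The interception scheme described above boils down to a switch between two signal sources: pass reality through, or suppress it and substitute a virtual feed. A toy sketch of that logic (all names are invented for illustration; this is not a claim about any real neural interface):

```python
# Toy model of the nanobot relay the essay describes: in virtual mode the
# real sensory signal is suppressed and a virtual one is substituted;
# otherwise reality passes through untouched. Purely illustrative.
def relay(real_signal: str, virtual_signal: str, virtual_mode: bool) -> str:
    """Return the virtual feed in virtual mode, the real feed otherwise."""
    return virtual_signal if virtual_mode else real_signal

print(relay("warm sunlight", "simulated rain", virtual_mode=False))  # warm sunlight
print(relay("warm sunlight", "simulated rain", virtual_mode=True))   # simulated rain
```

The same switch, applied in the outgoing direction, covers the motor case: the brain's movement commands are diverted to the virtual limbs instead of the real ones.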

The Web will provide a panoply of virtual environments to explore. Some will be recreations of real places; others will be fanciful environments with no “real” counterpart. Some will indeed be impossible in the physical world (perhaps because they violate the laws of physics). We will be able to “go” to these virtual environments by ourselves, or we will meet other people there, both real and virtual.

By 2030, going to a web site will mean entering a full-immersion virtual-reality environment. In addition to encompassing all of the senses, these shared environments could include emotional overlays, since the nanobots will be capable of triggering the neurological correlates of emotions, sexual pleasure, and other derivatives of our sensory experience and mental reactions.

The portrayal of virtual reality in The Matrix is a bit more primitive than this. The use of bioports in the back of the neck reflects a lack of imagination about how full-immersion virtual reality from within the nervous system is likely to work. The idea of a plug is an old-fashioned notion that we are already starting to get away from in our machines. By the time the Matrix is feasible, we will have far more elegant means of wirelessly accessing the human nervous system from within.

Virtual reality, as conceived of in The Matrix, is evil. Morpheus describes the Matrix as “a computer-generated dream world to keep us under control.” We saw similar portrayals of the Internet prior to its creation. Early fiction, such as the novels 1984 and Brave New World, portrayed the worldwide communications network as essentially evil, a means for totalitarian control of humankind. Now that we actually have a worldwide communications network, we can see that the reality has turned out rather different.

Like any technology, the Internet empowers both our creative and destructive inclinations, but overall the advent of worldwide decentralized electronic communication has been a powerful democratizing force. It was not Yeltsin standing on a tank that overthrew Soviet control during the 1991 revolt after the coup against Gorbachev. Rather it was the early forms of electronic messaging (such as fax machines and an early form of email based on teletype machines), forerunners to the Internet, that prevented the totalitarian forces from keeping the public in the dark. We can trace the movement towards democracy throughout the 1990s to the emergence of this worldwide communications network.

In my view, the advent of virtual reality will reflect a similar amplification of creative human communication. We have one form of virtual reality already. It’s called the telephone, and it is a way to “be together” even if physically apart, at least as far as the auditory sense is concerned. When we add all of the other senses to virtual reality, it will be a similar strengthening of human communication.

A Dystopian, Luddite Perspective

(credit: Warner Bros. Pictures)

The dystopian, Luddite perspective of the Wachowski brothers can be seen in the films’ view of the birth of artificial intelligence as the source of all evil. In one of Morpheus’ “sermons,” he tells Neo (Keanu Reeves) that “in the early 21st century, all of mankind united and marveled at our magnificence as we gave birth to AI [artificial intelligence], a singular construction that spawned an entire race of machines.” Morpheus goes on to explain how this singular construction became a runaway phenomenon as it reproduced itself and ultimately enslaved humankind.

The movie celebrates those humans who choose to be completely unaltered by technology, even spurning the bioport. Incidentally, in my book The Age of Spiritual Machines [2], I refer to such people as MOSHs (Mostly Original Substrate Humans). The movie’s position reflects a growing sentiment in today’s world to maintain a distinct separation of the natural and human-created worlds. The reality, however, is that these worlds are rapidly merging. We already have a variety of neural implants that are repairing human brains afflicted by disease or disability: for example, an FDA-approved neural implant that replaces the region of neurons destroyed by Parkinson’s disease, cochlear implants for the deaf, and emerging retinal implants for the blind.

My view is that the prospect of “strong AI” (AI at or beyond human intelligence) will serve to amplify human civilization much the same way that our technology does today. As a society, we routinely accomplish intellectual achievements that would be impossible without the level of computer intelligence we already have. Ultimately, we will merge our own biological intelligence with our own creations as a way of continuing the exponential expansion of human knowledge and creative potential.

However, I do not completely reject the specter of AI turning on its creators, as portrayed in The Matrix. It is a possible downside scenario, what Nick Bostrom calls an “existential risk” [3]. There has been a great deal of discussion recently about future dangers that Bill Joy [4,5,6] has labeled “GNR” (genetics, nanotechnology, and robotics). The “G” peril, which is the destructive potential of bioengineered pathogens, is the danger we are now struggling with. Our first defense from “G” will need to be more “G,” for example bioengineered antiviral medications.

Ultimately, we will provide a true defense from “G” by using “N,” nanoengineered entities that are smaller, faster, and smarter than mere biological entities. However, the advent of fully realized nanotechnology will introduce a new set of profound dangers. Our defense from “N” will also initially be created from defensive nanotechnology, but the ultimate defense from “N” will be “R,” small robots that are intelligent at human levels and beyond, in other words, strong AI. But then the question arises: what will defend us from malevolent AI? The only possible answer is “friendly AI” [7].

Unfortunately, there is nothing we can do today to ensure that AI will be friendly. Based on this, some observers, such as Bill Joy, call for us to relinquish the pursuit of these technologies. The reality, however, is that such relinquishment is not possible without instituting a totalitarian government that bans technology altogether (which is the essential theme of Brave New World). It’s the same story with human intelligence. The only defense we have had throughout human history from malevolent human intelligence is for more enlightened human intelligence to confront its more deviant forms. Our imperfect record in accomplishing this is at least one key reason that there is so much concern about GNR.


There are problems and inconsistencies with the conception of virtual reality in The Matrix. The most obvious is the absurd notion of the machines keeping all of the humans alive to use them as energy sources. Humans are capable of many things, but being an effective battery is not one of them. Our biological bodies do not generate any significant levels of useful energy. Moreover, we require more energy than we produce. Morpheus acknowledges that the machines needed more than just humans for energy when he tells Neo “25,000 BTU of body heat combined with a form of fusion [provide] the machines all the energy they need.” But if the machines have fusion technology, then they clearly would not need humans.
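The arithmetic behind this objection fits in a few lines. The figures below are rough, typical-adult estimates of my own; in particular, reading Morpheus’s “25,000 BTU” as a per-day figure is an assumption, since the film states no time period:

```python
# Back-of-envelope energy balance for a human "battery."
# Illustrative figures only: ~2,500 kcal/day food intake is a typical-adult
# estimate, and treating the film's "25,000 BTU" as a daily figure is an
# assumption (no time period is given on screen).
KCAL_TO_J = 4184          # joules per kilocalorie
BTU_TO_J = 1055           # joules per BTU
SECONDS_PER_DAY = 86_400

food_in_w = 2_500 * KCAL_TO_J / SECONDS_PER_DAY    # avg power from food
heat_out_w = 25_000 * BTU_TO_J / SECONDS_PER_DAY   # claimed body-heat output

print(f"food in:  {food_in_w:.0f} W")
print(f"heat out: {heat_out_w:.0f} W")

# By conservation of energy, heat out can never exceed food in, so the net
# energy harvested from a fed human is at most zero:
print(f"net: {food_in_w - heat_out_w:.0f} W")
```

On these numbers the claimed heat output exceeds the food input more than twofold, which only underlines the essay’s point: a fed human is a net energy sink, never a source.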

In his chapter “Glitches in The Matrix… and How to Fix Them” (also in the book Taking the Red Pill), Peter Lloyd surmises that “the machines are harnessing the spare brainpower of the human race as a colossal distributed processor for controlling the nuclear fusion reactions.” This is a creative fix, but equally unfounded. Human brains are not an attractive building block for a distributed processor. The electrochemical signaling pathway in the human brain is extremely slow: about 200 calculations per second, which is at least 10 million times slower than today’s electronics. The architecture of our brains is relatively fixed and unsuitable for harnessing into a parallel network. Moreover, the human brains in the story are presumably being actively used to guide the human lives in the virtual Matrix world. If the AIs in the Matrix are smart enough to create fusion power, they would not need a network of human brains to control it.
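The cited speed gap is easy to check. A minimal sketch, using the essay’s own rough figure of 200 interneuronal calculations per second against an assumed representative circa-2003 processor running at about 2 billion operations per second:

```python
# Illustrative per-element speed comparison. Both figures are rough
# era-appropriate estimates from the surrounding text, not measurements;
# the 2 GHz chip speed is an assumed representative value.
neuron_calcs_per_sec = 200     # electrochemical signaling, per the essay
chip_ops_per_sec = 2e9         # an assumed ~2 GHz processor, circa 2003

ratio = chip_ops_per_sec / neuron_calcs_per_sec
print(f"electronics: ~{ratio:,.0f}x faster per element")  # ~10,000,000x
```

The 10-million-fold gap is per signaling element; the brain compensates with massive parallelism, which is exactly why its fixed architecture is hard to repurpose as a general-purpose processor.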

There are other absurdities, such as the requirement to find an old-fashioned “land line” (telephone) to exit the Matrix. Lloyd provides a creative rationalization for this also (the land lines have fixed network addresses in the Matrix operating system that the Nebuchadnezzar’s computer can access), but given the inherent flexibility of a virtual-reality environment, it is clear that the reason for this requirement has more to do with the Wachowski brothers’ desire to celebrate old-fashioned technology as embodying human values.

There are many arbitrary rules and limitations in the Matrix that don’t make sense. Why bother fighting the agents at all (other than for the obvious “Kung Fu” cinematic reasons) when they cannot be destroyed? Why not just run away, or in the new movie, fly away?

Another attractive feature of the original Matrix movie was its philosophical musings, albeit a hodgepodge of metaphorical allusions. There’s Neo as the Christian Messiah who returns to deliver humanity from evil. There’s the Buddhist notion that everything we see, hear, and touch is an illusion. Of course, one might point out that the true reality in the Matrix is a lot grimier and grimmer than the Buddhist idea of enlightenment. We hear the martial-arts philosophy (borrowed from Star Wars) of freeing oneself from rational thinking to let one’s inner warrior emerge.

Then there is the green philosophy of humanity as inimical to its natural environment. This view is actually articulated by Agent Smith, who describes humanity as “a virus that does not maintain equilibrium with its environment.” Most of all, we are treated to a Luddite celebration of pure humanity, along with the 19th century and early 20th century technologies of rotary phones and old gear boxes, which presumably reflect human purity.

My overall reaction to this conception is that the human rebels will need advanced technology at least comparable to that of the evil AIs if they are to prevail. The film’s notion that advanced technology is inherently evil is misplaced. Technology is power, and whoever wields that power will prevail. The “machines” as portrayed in The Matrix do appear to be malevolent, but the rebels are not likely to survive with their old-fashioned gear boxes. However, with the script in the hands of the Wachowski brothers, we can assume that the rebels will nonetheless have a fighting chance.

Matrix Reloaded

Which brings us to The Matrix Reloaded. Like the sequels to Star Wars and Alien, breakout movies of their time, this one loses the elegance, style, and originality of the original. The new film wallows in endless battle and chase scenes. Moreover, these confrontations lack any real dramatic tension. The producers are constantly changing the rules of engagement, so one never thinks, “how are they going to get out of this jam?” One has only the sense that a particular character will continue if the Wachowski brothers want that character around for their own cinematic reasons. They are continually coming up with arbitrary new rules and exceptions to the rules.

Much of the fighting makes little sense. Given that the evil twin apparitions are able to magically transport themselves directly into Trinity’s vehicle, and Neo is able to fly like Superman, the hand-to-hand combat and use of knives and poles lacks even the logic of a video game. For that matter, the two scenes of Neo battling the 100 Smiths looked exactly like a video game. Like so much of the action, these scenes seemed superfluous and time-wasting. Smith is no longer an agent, and he plays no clear role in the story, to the extent that there was any attempt to tell a coherent story.

About two thirds of the way through this sequel, I turned to my companion and asked, “Whatever happened to the plot? Wasn’t there something about 250,000 Sentinels attacking Zion, the last human city?” My companion responded that “plot” seemed to be a four-letter word to the moviemakers. Of course, there wasn’t much time for plot development, given all of the devotion to chasing and fighting, not to mention an equally drawn-out gratuitous sex scene (well, at least there is one reason to go see this film).

If plot development was weak, character development was worse. Many reviewers of the first Matrix movie noted that Keanu Reeves could not act. But his acting in the first Matrix is downright Shakespearian compared to the sequel. At least in the original, there was some portrayal of Neo’s struggle with his discovery of the true nature of the Matrix, of his grappling with his role as “the one,” and his coming-of-age tutorials.

In Reloaded, Reeves acts like he’s had a lobotomy, sleepwalking or rather sleep-flying through the whole movie. His lover, Trinity (Carrie-Anne Moss), is equally distant and unemotional, acting like a frustrated librarian with a black belt. Morpheus was appealing in the first movie with his earnest confidence and wisdom. In the new film, he’s like a preacher on morphine, which quickly gets tiresome.

The philosophical dialogues, which were refreshing in the original, sound like late-night college banter in the sequel. As for the technology of the movie itself, there was really nothing special here. They did trash about 100 General Motors cars on a multi-million dollar roadway built especially for the movie, but aside from bigger explosions, the effects were the opposite of riveting. Some of the organic backgrounds of the city of Zion were attractive, but they were all illustrated, and lacked the genuine warmth of a real human environment, which the movie professes to celebrate. The Wachowski brothers’ notion of human celebration is also a bit weird as portrayed in the retro rave festivities on Zion to honor the return of the rebels.

Although I take issue with the strong Luddite posture of the original Matrix, I recognized its importance as a forceful and stylish articulation in cinematic terms of salient 21st century issues. Unfortunately, the sequel throws away this metaphysical mantle.

1. Glenn Yeffeth, Ed., Taking the Red Pill: Science, Philosophy and Religion in The Matrix (Ben Bella Books, April 2003)

2. Ray Kurzweil, The Age of Spiritual Machines, Penguin USA, 1999

3. Nick Bostrom, “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards,” 2001

4. Bill Joy, “Why the future doesn’t need us,” Wired, April 2000

5. Ray Kurzweil, “In Response to,” July 25, 2001

6. Ray Kurzweil, “Testimony of Ray Kurzweil on the Societal Implications of Nanotechnology,” April 9, 2003

7. Eliezer S. Yudkowsky, “What is Friendly AI?,” May 3, 2001