letters from Ray | Physics pioneer David Deutsch, PhD says human-level artificial intelligence is possible

January 1, 2000


Dear readers,

This is a question from my family friend Jacob Sparks. He’s working on his PhD in philosophy. His dissertation is on moral epistemology — the philosophical basis of moral systems.

As we consider the future of artificial intelligence, its capacity to reflect human ethics, make decisions, and weigh priorities is a key focus for theorists who expect computers to extend our abilities.

This discussion is interesting because it touches on subtle human qualities like morality, judgment, and even emotion, which play an important role in everyday human life.

I hope you enjoy this conversation!

Ray Kurzweil


Dear Ray,

I recently saw this essay in Aeon magazine called “How close are we to creating artificial intelligence?” I’m curious to hear what you think about it. It’s written by David Deutsch, PhD, a renowned physicist at the University of Oxford. He wrote the popular physics books The Beginning of Infinity and The Fabric of Reality.

He pioneered the field of quantum computation by formulating a description of a quantum Turing machine and an algorithm designed to run on a quantum computer. Deutsch is a proponent of the many-worlds interpretation of quantum mechanics.

Jacob Sparks


on the web | essentials

Aeon | main
Aeon | How close are we to creating artificial intelligence?



excerpt | from the essay

Aeon | How close are we to creating artificial intelligence?
by David Deutsch, PhD 

“I’m convinced the problem of developing artificial general intelligence — called AGI — is a matter of philosophy, not computer science or neurophysiology. And the philosophical progress that is essential to its future integration is also a prerequisite for developing it in the first place.

“The lack of progress toward AGI is due to a logjam of misconceptions. Without Popperian epistemology, we can’t begin to guess what detailed functionality must be achieved to make an artificial general intelligence.

“And Popperian epistemology is not widely known, nor is it understood well enough to be applied. Thinking of an AGI as a machine for translating experiences, rewards, and punishments into ideas — or worse, just into behaviors — is like trying to cure disease by balancing bodily humors: futile, because it’s rooted in an archaic, wildly mistaken worldview.

“Without understanding that the functionality of an AGI is qualitatively different from that of any other kind of computer program, we’re working in an entirely different field. If we work toward programs whose thinking is constitutionally incapable of violating constraints, we’re trying to engineer away the defining attribute of an intelligent being, of a person — namely creativity.

“Clearing this logjam alone won’t provide the answer. But the answer can’t be all that difficult. Another consequence of understanding that the target ability is qualitatively different is that, since humans have it and apes don’t, the information for how to achieve it must be encoded in a tiny number of differences between the DNA of humans and that of chimpanzees.

“So I can agree with the ‘AGI is imminent’ camp. It’s plausible that just a single idea stands between us and breakthrough. But it will have to be one of the best ideas ever.”


set 1 — reading notes:
Wikipedia | Karl Popper, PhD
Wikipedia | epistemology
Wikipedia | philosophy of science
Wikipedia | critical rationalism


set 2 — reading notes:
Wikipedia | artificial intelligence
Wikipedia | artificial general intelligence
Wikipedia | intelligent agent
Wikipedia | computational theory of mind



books | by David Deutsch, PhD