The future of moral machines

December 28, 2011

Asimov's I, Robot

The prospect of machines capable of following moral principles, let alone understanding them, seems as remote today as the word “robot” is old, suggests Colin Allen, co-author of the book Moral Machines, writing in The New York Times’ Opinionator blog.

“I am skeptical about the Singularity, and even if ‘artificial intelligence’ is not an oxymoron, ‘friendly AI’ will require considerable scientific progress on a number of fronts,” he says. “Dynamical systems theory, network science, statistical learning theory, developmental psychobiology, and molecular neuroscience all challenge some foundational assumptions of AI, and of the last 50 years of cognitive science more generally. And fully human-level moral agency, and all the responsibilities that come with it, requires developments in artificial intelligence or artificial life that remain, for now, in the domain of science fiction.”

However, “far from being an exercise in science fiction, serious engagement with the project of designing artificial moral agents has the potential to revolutionize moral philosophy in the same way that philosophers’ engagement with science continuously revolutionizes human self-understanding. … Even if success in building artificial moral agents will be hard to gauge, the effort may help to forestall inflexible, ethically blind technologies from propagating.”