Eliezer S. Yudkowsky

July 11, 2009

Eliezer Yudkowsky is a Research Fellow at the Singularity Institute for Artificial Intelligence. Yudkowsky’s professional work focuses on Artificial Intelligence designs that enable self-understanding, self-modification, and recursive self-improvement (“seed AI”), and on Artificial Intelligence architectures that enable the creation of sustainable and improvable benevolence (“Friendly AI”). He has spoken on these two topics at venues ranging from private corporations to Foresight gatherings.

He created the Friendly AI approach to AGI, which emphasizes the structure of an ethical optimization process and its supergoal, in contrast to the common trend of seeking the right fixed enumeration of ethical rules for a moral agent to follow. In 2001, he published the first technical analysis of motivationally stable goal systems, the book-length Creating Friendly AI: The Analysis and Design of Benevolent Goal Architectures. In 2002, he wrote “Levels of Organization in General Intelligence,” a paper on the evolutionary psychology of human general intelligence, published in the edited volume Artificial General Intelligence (Springer, 2006). He has two papers in the edited volume Global Catastrophic Risks (Oxford, 2008): “Cognitive Biases Potentially Affecting Judgment of Global Risks” and “AI as a Positive and Negative Factor in Global Risk.”

Links:
http://singinst.org/
See essays by this author:
What is Friendly AI?
See selected books by this author:
The Hanson-Yudkowsky AI-Foom Debate