Artificial Superintelligence: A Futuristic Approach
July 1, 2013
Many philosophers, futurologists, and artificial intelligence researchers have conjectured that within the next 20 to 200 years a machine capable of at least human-level performance on all tasks will be developed.
Since such a machine would, among other things, be capable of designing the next generation of even smarter machines, it is generally assumed that an intelligence explosion will take place shortly after such a technological self-improvement cycle begins.
While specific predictions regarding the consequences of such an intelligence singularity vary, from potential economic hardship to the complete extinction of humankind, many of the researchers involved agree that the issue is of utmost importance and needs to be seriously addressed. This book, “Artificial Superintelligence: A Futuristic Approach,” will directly address this issue and consolidate research aimed at making sure that emerging superintelligence is beneficial to humanity.
Writing Sample: Leakproofing the Singularity
What others said about “Leakproofing the Singularity”:
“Yampolskiy’s excellent article gives a thorough analysis of issues pertaining to the “leakproof singularity”: confining an AI system, at least in the early stages, so that it cannot “escape”. It is especially interesting to see the antecedents of this issue in Lampson’s 1973 confinement problem in computer security. I do not have much to add to Yampolskiy’s analysis.”
David J. Chalmers, Professor of Philosophy, New York University
“This is great! I like the way you
- introduce the state of the art in related security for ordinary computer systems
- review the academic literature
- review the discussion-group posts which, though obscure, make innovative and essential points
- enumerate possible failure scenarios, and suggest solutions
- while pointing out clearly that all solutions can fail in the face of superintelligence.
This is exactly the sort of article the community needs.”
Joshua Fox, Research Associate at Singularity Institute
“AI researcher Roman Yampolskiy’s article, ‘Leakproofing the Singularity: Artificial Intelligence Confinement Problem’, provides us with a detailed and well-reasoned analysis of … ways of externally constraining the AI design that might lead towards a singularity, especially constraining such AI to a virtual world from which it cannot leak into the real world.”
Uziel Awret, Editor of Special Issue on Singularity of Journal of Consciousness Studies
“The connection back to Lampson is very interesting and apt.”
Vernor Vinge, Hugo Award-winning author and Professor of Mathematics (retired)
Tentative List of Chapters:
1) Introduction to Artificial Superintelligence.
2) AI-Completeness – the Problem Domain of Superintelligent Machines.
3) The Space of Mind Designs and the Human Mental Model.
4) How to Prove that You Invented Superintelligence So No One Else Can Steal It.
5) Wireheading, Addiction and Mental Illness in Machines.
6) On the Limits of Recursively Self-Improving Artificially Intelligent Systems.
7) Singularity Paradox and What to Do About It.
8) Superintelligence Safety Engineering.
9) Artificial Intelligence Confinement Problem (and Solution).
10) Controlling Impact of Future Super AI.
11) Efficiency Theory: a Unifying Theory for Information, Computation and Intelligence.
12) Unverifiability: Why Software Can’t Ever be Completely Bug Free.
13) Artimetrics: Behavioral and Visual Identity Management of Artificial Agents.
14) Wisdom of Artificial Crowds: Simulating Democracy and Intelligence of Crowds in Cyberspace.
January – May 2013: preparing the fundraising campaign.
May – July 2013 (NOW): crowdfunding campaign and continuing research.
July – October 2013: writing the book.
October – December 2013: re-writing, editing, revising, proofreading, formatting, finalizing cover design, publishing.
Early 2014: shipping!!!
About the Author
Dr. Roman Yampolskiy conducts research in Artificial Intelligence Safety and the Technological Singularity. An alumnus of Singularity University (GSP2012) and a visiting fellow / research advisor of the Singularity Institute (MIRI), Dr. Yampolskiy has contributed papers to the first book on the Singularity (Singularity Hypotheses, Springer, 2012), the first journal issue devoted to the Singularity (Journal of Consciousness Studies, 2012), and the first conference devoted to safe superintelligent systems (AGI Safety, 2012).
Roman V. Yampolskiy holds a PhD from the Department of Computer Science and Engineering at the University at Buffalo, where he was a recipient of a four-year NSF (National Science Foundation) IGERT (Integrative Graduate Education and Research Traineeship) fellowship. Before beginning his doctoral studies, Dr. Yampolskiy received a combined BS/MS degree (High Honors) in Computer Science from the Rochester Institute of Technology, NY, USA.
After completing his PhD dissertation, Dr. Yampolskiy held the position of Affiliate Academic at the Centre for Advanced Spatial Analysis, University College London. In 2008 Dr. Yampolskiy accepted an assistant professor position at the Speed School of Engineering, University of Louisville, KY. He had previously conducted research at the Laboratory for Applied Computing (currently known as the Center for Advancing the Study of Infrastructure) at the Rochester Institute of Technology and at the Center for Unified Biometrics and Sensors at the University at Buffalo.
Dr. Yampolskiy is the author of over 100 publications, including multiple journal articles and books. His research has been cited by numerous scientists and profiled in American and foreign popular magazines (New Scientist, Poker Magazine, Science World Magazine), on dozens of websites (BBC, MSNBC, Yahoo! News), and on radio (German National Radio, Alex Jones Show). Reports about his work have attracted international attention and have been translated into many languages, including Czech, Danish, Dutch, French, German, Hungarian, Italian, Polish, Romanian, and Spanish.
How will the money be spent?
The money is mostly needed to pay for publication, marketing, and distribution, as well as the costs of editing and proofreading the book. Some funds will also go toward acquiring copyrighted materials, such as images for the cover, and toward ongoing research expenses. Additionally, Dr. Yampolskiy works under a contract with the University of Louisville that does not include June and July. A portion of the raised funds will be used to feed Dr. Yampolskiy during that time, as he will be very hungry from all that writing.
What has already been done?
Most of the research has been completed. You can never be done consulting with the experts, but I have already had the chance to exchange ideas with some of the world’s best scientists and philosophers. Drafts of individual chapters have been written, and professionals have been recruited for cover design and proofreading.
Who designed this awesome book cover?
A friend and former classmate, Svetlana Dolinskiy, is responsible for the design of the front cover.
I found a spelling error; what should I do?
Call 911! No, wait: just email me and I will fix it (and thank you).
Have you published any other books?
Yes, this would be my 7th book. You can find my other books for sale on Amazon.com.
Do you have links to the media coverage about your research?
Yes, you can find them linked from my homepage. Even before this funding campaign started, I was fortunate to have a lot of interest in my research from the popular media.
I am with the media and would like to interview you about your research. What is the best way to get in touch with you?
My email address and phone numbers are listed on my homepage.
Do you have any other videos of you presenting your research?
Yes, I have a few available online: my talk at the AGI conference in Oxford and my Ignite presentation at Singularity University.
This project is undertaken as a personal initiative, and I am not acting as a representative of any organization, group, institution, company, or future superintelligence. I (Roman Yampolskiy) am solely responsible for this fundraising campaign, including the proposed work, expressed ideas and opinions, and promised deliverables. Neither the University of Louisville nor any other organization or person should be assumed to share said opinions, ideas, or goals, or to bear any legal or financial responsibility to the campaign supporters.