on stage | Future of Life Institute: Ray Kurzweil at Beneficial Artificial Intelligence event

With videos of top conversations on computing futures.
February 8, 2017

Dear readers,

I participated in the well-organized Future of Life Institute event Beneficial Artificial Intelligence • 2017 — exploring how we can develop advanced future tech to benefit humanity and avoid risks.

The event gathered many top technologists, policymakers, and executives. I gave several talks, which you can view below. At the event, I also helped draft the Asilomar AI Principles: 23 guidelines for keeping the ongoing development of artificial intelligence safe and beneficial to the world.

Many prominent leaders at the Future of Life Institute event engaged with these deep ideas, drafting and signing the principles, including:

  • Stephen Hawking, PhD — physicist at the University of Cambridge
  • Demis Hassabis, PhD — co-founder & CEO of DeepMind
  • Mustafa Suleyman — co-founder of DeepMind
  • Yann LeCun, PhD — Director of AI Research at Facebook
  • Peter Norvig, PhD — Director of Research at Google
  • Anthony Romero — Executive Director of the American Civil Liberties Union
  • Elon Musk — CEO of SpaceX & Tesla

So far, more than 2,700 people have endorsed the principles we created by adding their signatures. Here is the full list of experts who signed. Much more background on this successful event is below, along with videos of my keynote talk and of a panel on superintelligence and creating human-level artificial intelligence.

Enjoy the conversation!
Ray Kurzweil

on the web: 
Future of Life Institute | main
Future of Life Institute | Beneficial Artificial Intelligence • 2017
Future of Life Institute | full list of event participants

Future of Life Institute | Asilomar AI Principles • 2017
Future of Life Institute | full list of Asilomar AI Principles signatories
Future of Life Institute | principled AI discussion in Asilomar

Future of Life Institute | YouTube channel


1.  Future of Life Institute | mission
Founding ideas & goals.

The Future of Life Institute was founded to catalyze and support research and projects for safeguarding life and developing optimistic visions of the future — positive ways for humanity to steer its own course through new technologies and challenges.

With technology improving at an accelerating pace, the institute works to ensure that tomorrow’s most powerful technologies are beneficial for humanity.

With capabilities as powerful as nuclear weapons, biotechnology, and artificial intelligence, planning ahead is a better strategy than learning from mistakes. The Future of Life Institute is focused on reducing these risks.

on the web:
Wikipedia | Future of Life Institute
Wikipedia | artificial intelligence
Wikipedia | existential risk from artificial intelligence


2.  Future of Life Institute | Beneficial Artificial Intelligence • 2017
Event featuring top researchers & leaders.

Future of Life Institute’s recent event Beneficial Artificial Intelligence • 2017 brought together top artificial intelligence (AI) researchers from academia & industry — plus thought leaders in economics, law, ethics, and philosophy. Attendees from many AI fields hashed out opportunities and challenges related to the future of AI. They explored steps we can take to ensure that the technology is beneficial.


3.  Future of Life Institute | Asilomar AI Principles • 2017
Guidelines written by the event attendees.

Artificial intelligence tools already power everyday life around the world. AI’s continued development, guided by the Asilomar AI Principles, offers a framework for keeping the technology safe in the centuries ahead. Event attendees created the principles during the conference, and they discuss the details in interviews available on the Future of Life Institute’s website.


video collection | Future of Life Institute
Presentations from the event Beneficial Artificial Intelligence • 2017 — complete video set


video | talk by Ray Kurzweil
Creating human-level AI: how & when.

about | Ray Kurzweil explores how and when we might create human-level artificial intelligence, at the Beneficial Artificial Intelligence conference organized by the Future of Life Institute.


video | panel discussion
Ray Kurzweil & other thought leaders explore the topic of superintelligence: science or fiction?

about | Panel discussion on the likely outcomes if technologists succeed in building human-level artificial intelligence, and what we would like to happen. At the Beneficial Artificial Intelligence conference organized by the Future of Life Institute.

moderator: Max Tegmark

panelists:
Nick Bostrom
David Chalmers
Sam Harris
Demis Hassabis
Ray Kurzweil
Elon Musk
Stuart Russell
Bart Selman
Jaan Tallinn


talk | AI & the Economy — by Erik Brynjolfsson, PhD
1. Ray Kurzweil comment at — 20 minutes, 30 seconds

transcript | comment from Ray Kurzweil — 20 minutes, 30 seconds

“So one thing you didn’t mention is the 50% deflation rate inherent in information technology. So when this middle-class worker whose wages are supposedly stagnant buys a $300 smartphone, she’s getting a trillion dollars of computation circa 1965 — and you say, well, she can’t really benefit from all that computation directly, but she’s getting millions of dollars of free services: a free encyclopedia, etc.

“So then the challenge is: ok, it’s true for this really interesting world of information products, but you can’t eat information technology, you can’t live in it, you can’t wear it. The point I make is that all that’s going to change — we’re going to print out clothing with 3D printers in the 2020s, manufacture modules you can snap together to build a house inexpensively, and we’re going to have vertical agriculture, which will automate food production and make it very inexpensive.

“So the value of a dollar is actually going up. But we don’t see that, because we take these gains in both the numerator and the denominator, so some people ask: where are the productivity gains?” — Ray Kurzweil
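
code sketch | the compounding behind the 50% deflation point

A rough back-of-envelope illustration of the arithmetic in Ray’s comment (our own sketch, not from the talk). It assumes the 50% annual price decline in computation is constant from 1965 to 2017; the 1.6-year halving time at the end is a hypothetical comparison point, showing that the quoted “trillion dollars” figure is, if anything, conservative under slower deflation.

# Back-of-envelope sketch (ours, not from the talk): what a constant 50%
# annual price decline in computation implies for the $300 smartphone example.
price_today = 300              # dollars, the smartphone in the quote
years = 2017 - 1965            # elapsed time back to "circa 1965"

# With the cost of a fixed amount of computation halving every year,
# the same computation cost 2**years times more in 1965 dollars.
price_1965 = price_today * 2 ** years
print(f"equivalent 1965 cost at 50%/year deflation: ${price_1965:.2e}")   # ~1.4e+18

# Even a hypothetical, much slower halving time still reaches the
# quote's "trillion dollars":
halving_years = 1.6
price_1965_slow = price_today * 2 ** (years / halving_years)
print(f"equivalent 1965 cost, 1.6-year halving time: ${price_1965_slow:.2e}")  # ~1.8e+12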


talk | Creating Human-Level AI — by Yoshua Bengio, PhD
2. Ray Kurzweil comment at — 24 minutes, 40 seconds

transcript | comment from Ray Kurzweil — 24 minutes, 40 seconds

“So my view, and I’ll talk more about this in my talk tomorrow, is that the neocortex is organized hierarchically: it’s actually a hierarchy of sequential models. And these sequential models are not LSTMs *. They don’t deal with long-term dependencies; that’s dealt with by the hierarchy, which also deals with compositionality.

“And the hierarchy also deals with the sort of intricate interactions at different levels of abstraction that we find DNNs * can do, incidentally, to some extent. But I will talk about some of the work we’re doing where we can significantly outperform LSTMs in terms of this type of abstraction. And I think that ultimately a lot of the problems we’re seeing will be solved by using a hierarchy of sequential models.” — Ray Kurzweil

transcript | response from Yoshua Bengio, PhD

“I agree. I think having a hierarchy in many different ways: so in space, in time, in abstraction — all of these things are important. And we’ve been pushing these boundaries for several years. And I think much more needs to be done still, thanks.” — Yoshua Bengio, PhD

* LSTM is long short-term memory
* DNN is deep neural network

Wikipedia | long short-term memory
Wikipedia | deep neural network
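
code sketch | a hierarchy of sequential models

To make the structural idea in this exchange concrete, here is a minimal conceptual sketch in Python. It is our own illustration, not the model Kurzweil describes building: the dimensions, chunk size, and random weights are all invented, and each level is just a plain tanh recurrence rather than a trained network. What it demonstrates is the shape of the architecture: each level runs a short-range sequential model over the outputs of the level below and passes a compressed summary upward, so long-range dependencies are carried by the hierarchy itself rather than inside any single recurrent unit, as an LSTM would attempt.

import numpy as np

rng = np.random.default_rng(0)

def recurrent_step(h, x, W_h, W_x):
    """One step of a plain (non-LSTM) recurrence: h_next = tanh(W_h @ h + W_x @ x)."""
    return np.tanh(W_h @ h + W_x @ x)

def run_level(inputs, dim, chunk):
    """Run a simple sequential model over `inputs`, emitting one summary
    state every `chunk` steps; the summaries feed the next level up."""
    in_dim = inputs[0].shape[0]
    W_h = rng.normal(scale=0.5, size=(dim, dim))      # untrained toy weights
    W_x = rng.normal(scale=0.5, size=(dim, in_dim))
    h = np.zeros(dim)
    summaries = []
    for t, x in enumerate(inputs, start=1):
        h = recurrent_step(h, x, W_h, W_x)
        if t % chunk == 0:        # hand a compressed summary upward
            summaries.append(h.copy())
    return summaries

# Fast, low-level input: 64 steps of 8-dimensional observations.
observations = [rng.normal(size=8) for _ in range(64)]

level_1 = run_level(observations, dim=16, chunk=4)    # 64 steps -> 16 summaries
level_2 = run_level(level_1, dim=16, chunk=4)         # 16 summaries -> 4
level_3 = run_level(level_2, dim=16, chunk=4)         # 4 summaries -> 1
print(len(level_1), len(level_2), len(level_3))       # 16 4 1

Each level up sees a 4x coarser timescale, so the top of even this toy three-level stack spans the whole 64-step sequence, while no single recurrence ever bridges more than a few steps directly.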


panel | Implications of AI for the Economy & Society
3. Ray Kurzweil comment at — 17 minutes, 55 seconds

transcript | comment from Ray Kurzweil — 17 minutes, 55 seconds

“I have a comment on Jeff Sachs’s discussion of nastiness. So, we have a lot of people in the world, so even a small percentage of nastiness is going to be a lot of nastiness — I think you implied that it’s getting worse — I have an observation, and I’d be interested in your response. I think our information about nastiness is getting exponentially better: if there’s some nastiness 10,000 miles away, we not only hear about it, we experience it. There could have been a battle that wiped out the next village 100 years ago, and we wouldn’t even have heard about it. Yet all the measures of the opposite of nastiness are getting dramatically better.

“Even Steven Pinker documents an exponential decline in violence. There was 1 democracy in the world 2 centuries ago, and 5 one century ago — we can argue how many there are today, but it’s a lot more than 5. Poverty has been cut, according to the World Bank, by 50% worldwide in the last 20 years; education is up, etc.

“Is it your view that nastiness is getting worse? My view is that, for example, social media is actually spreading a universal set of ethical beliefs, and we hear very dramatically when there are exceptions to that.” — Ray Kurzweil

transcript | response from Jeffrey Sachs, PhD

“Thanks a lot. I didn’t say it’s getting worse, I just said it’s a nasty world — and here comes a very powerful technology, extraordinary in what it can do. The lesson of every technology is that it can be deployed for good or bad, and this is one of the most powerful imaginable. It can be deployed for cyberwar, for creating mass surveillance of society, or for improving quality of life.

“I was advising: don’t only make a list of desirables; understand how tough this world is, and how these technologies can be very badly deployed. Privacy has plummeted in our society, government surveillance has soared. Cyberwarfare is a reality — we’re so blasé about this. My point was that it only takes one war to change Steven Pinker’s trends. In 1913 all this techno-optimism was real, and a war came that, 102 years later, historians still can’t explain a deeper cause for; the First World War was mindless. We’re not out of that human nature. I’m only advising that this be not just a list of goods and bads, but real advocacy by this community.” — Jeffrey Sachs, PhD

Wikipedia | Steven Pinker, PhD


panel | Policy & Governance
4. Ray Kurzweil comment at — 25 minutes, 50 seconds

video coming soon | link

transcript | “I enjoyed your talk, Heather. I appreciate and support your passion, because I think it’s a passionately important issue. I’m also a fan of Hannah Arendt; I love her phrase ‘the banality of evil,’ which says a lot.

“You addressed a lot of these issues, but I do want to return to the dual-use issue. So, an Amazon drone that delivers medicine to a hospital in Africa could easily be re-deployed, or the underlying technology could be re-deployed, to deliver a bomb in the same way. So, one of the arguments for banning it is that we did that with chemical weapons; but we can get through the day without anthrax or smallpox. These are dual-use weapons.

“I was actually on the U.S. Army’s science advisory group about 15 years ago; my issue was not artificial intelligence but protection against bioterrorism. But they were developing autonomous weapons back then and, as you pointed out, the rationale was: well, ok, the humans are going to be in the loop; the humans will decide the strategy and the weapons will just do the tactics.

“But that’s a very loose line: a tactic could be ‘take this hill’ or ‘take this town.’ So one person’s, or one AI’s, tactics are another’s strategy. But my main question is: the horse has been out of the barn on this issue for 15 to 20 years, so what is a reasonable goal at this point, given that it’s already — I mean, all the militaries in the world have been pursuing this for quite some time.” — Ray Kurzweil

Wikipedia | dual-use technology
Wikipedia | Hannah Arendt


Future of Life Institute | Beneficial Artificial Intelligence • 2017
Full event agenda with presentation videos.


DAY 1 | all videos — January 5, 2017

reception: welcome


DAY 2 | all videos — January 6, 2017

theme | economics
How can we grow prosperity through automation without leaving people lacking income or purpose?

opening: keynotes on AI, economics & law
panel: How is AI automating & augmenting work?
panel: What are the implications of AI for the economy & society?
fireside chat: What makes people happy?


DAY 3 | all videos — January 7, 2017

theme | creating human-level AI
Will it happen, and if so, when and how? What key remaining obstacles can be identified?
How can we make future AI systems more robust than today’s, so they don’t crash, malfunction, or get hacked?

talks & panel: on topic

theme | superintelligence
Science or fiction? If human-level general AI is developed, then what are likely outcomes?
What can we do now to maximize the probability of a positive outcome?

panel: If we build human-level AI, what are likely outcomes? What would we like to happen?


DAY 4 | all videos — January 8, 2017

theme | law, policy & ethics
How can we update legal systems, international treaties and algorithms to be more fair, ethical & efficient?
And to keep pace with AI?

talks: on topic
panel: policy & governance
panel: AI & the law
panel: AI & ethics


on the web:
United States White House | “Artificial Intelligence, Automation & The Economy” • report
Robotics Trends | “Asilomar AI Principles, 23 tips for making AI safe”
GeekWire | “Should we be hooking up AI to our brains, new Asilomar principles urge caution”