When creative machines overtake man
March 31, 2012 by Jürgen Schmidhuber
Machine intelligence is improving rapidly, to the point that the scientist of the future may not even be human! In fact, in more and more fields, learning machines are already outperforming humans. As noted in this transcript of a talk at TEDxLausanne on Jan. 20, 2012, artificial intelligence expert Jürgen Schmidhuber isn’t able to predict the future accurately, but he explains how machines are getting creative, why 40,000 years of Homo sapiens-dominated history are about to end, and how we can try to make the best of what lies ahead.
When I was a boy, I wanted to become a physicist like my hero Einstein, until I realized as a teenager that it would have a much bigger impact to build a scientist smarter than myself (my colleagues claim that should be easy) and let him do the remaining work.
So I became an artificial intelligence researcher. It’s a good time to be one; it looks like AI will help to make 40,000 years of human-dominated history converge around the year 2040, which I call Omega. Some call it “the Singularity,” but I prefer Omega, because that’s what Teilhard de Chardin called it 100 years ago, and because it sounds so much like “Oh my God.”
Acceleration of key events in human history
Let me show you this pattern of exponential acceleration of the most important events in human history, which started 40,000 years ago with the emergence of Homo sapiens sapiens from Africa.
We take a quarter of this time: Omega minus 10,000 years. That’s precisely the next big chapter in the history books: emergence of civilization, agriculture, domestication of animals, first villages.
And we take a quarter of this time: Omega – 2500 years. That’s precisely the Axial Age, as Jaspers called it: major religions founded in India and China and the West (Old Testament); the ancient Greeks laid the foundations of the Western world — formal reasoning, sophisticated machines including steam engines, anatomically perfect sculptures, harmonic music, organized sport, democracy.
And we take a quarter of this time: Omega – 625 years. That’s precisely the next big advance: the Renaissance; beginnings of the scientific revolution; invention of the printing press (often called the most influential invention of the past 1000 years); the age of exploration, first through Chinese fleets, then also European explorers such as Columbus, who did not become famous because he was the first to discover America, but because he was the last.
And we take a quarter of this time: Omega – 2 human lifetimes: the late 19th century; emergence of the modern world (many companies that still exist today were founded back then); invention of combustion engines and cars, cheap electricity, modern chemistry; germ theory of disease revolutionizes medicine; Einstein is born; and the biggest event of them all: the onset of the population explosion from 1 billion to soon 10 billion, driven by fertilizer and then artificial fertilizer.
And we take a quarter of this time: Omega – 1/2 lifetime. That’s the year 2000: the emerging digital nervous system covers the world; WWW and cheap computers and cell phones for everybody; the information processing revolution.
And we take a quarter of this time: Omega – 10 years. Now that’s in the future. Many have learned the hard way that it’s difficult to predict the future, including myself and the guy responsible for my investments.
Nevertheless, a few things can be predicted confidently, such as: soon there will be computers faster than human brains, because computing power will continue to grow by a factor of 100–1000 per decade per Swiss Franc (or a factor of 100 per Dollar, because the Dollar is deflating so rapidly).
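The arithmetic behind that prediction is simple compounding. A minimal sketch, taking the talk's own per-decade factors as assumptions rather than measured data:

```python
# Cumulative growth of compute per Swiss Franc, assuming the talk's
# figure of a 100-1000x improvement per decade.

def growth_over(years: float, factor_per_decade: float) -> float:
    """Total improvement factor after `years`, compounding per decade."""
    return factor_per_decade ** (years / 10)

for factor in (100, 1000):
    print(f"{factor}x per decade over 30 years -> {growth_over(30, factor):,.0f}x")
# prints 1,000,000x and 1,000,000,000x
```

Even at the lower rate, three decades already mean a millionfold improvement per unit cost.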
Computers that solve problems better than humans
Now you say: OK, computers will be faster than brains, but they lack the general problem-solving software of humans, who apparently can learn to solve all kinds of problems!
But that’s too pessimistic. At the Swiss AI Lab IDSIA in the new millennium, we already developed mathematically optimal, learning, universal problem solvers living in unknown environments.
That is, at least from a theoretical point of view, blueprints of universal AIs already exist. They are not yet practical for various reasons; but on the other hand, we already have brain-inspired artificial neural networks that are not quite as universal but very practical, and that are learning complex tasks that seemed infeasible only 10 years ago.
In fact, the recurrent or deep neural nets developed in my lab are currently winning all kinds of international machine learning competitions. For example, they are now the best methods for recognizing connected French handwriting. And also Arabic handwriting. And also Chinese handwriting. Although none of us speaks a word of Arabic or Chinese. And our French is also not so good.
But we don’t have to program these things. They learn from millions of training examples, extracting the regularities, and generalizing on unseen test data. Just a few months ago, our team participated in the traffic sign recognition competition (important for self-driving cars). Many teams around the world participated, but finally ours came in first, and the second best performance was not by another machine learning competitor, but by humans.
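The principle of learning from examples instead of explicit programming can be shown at toy scale. The sketch below trains a single artificial neuron with the classic perceptron rule; it is only an illustration, not the deep and recurrent networks that actually won those competitions:

```python
# A single neuron learns the OR function purely from labeled examples;
# we program only the learning procedure, never the rule itself.

def train_perceptron(examples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x[0]      # nudge weights toward the target
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR truth table
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 1, 1, 1]
```

Scaled up to millions of weights and millions of labeled images, the same weight-nudging idea is what extracts regularities and generalizes to unseen test data.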
A Formal Theory of Fun and Creativity
Now you say: OK, maybe computers will be faster and better pattern recognizers, but they will never be creative! But that’s too pessimistic. In my group at the Swiss AI Lab IDSIA, we developed a Formal Theory of Fun and Creativity that formally explains science & art & music & humor, to the extent that we can begin to build artificial scientists and artists.
Let me explain it in a nutshell. As you are interacting with your environment, you record and encode (e.g., through a neural net) the growing history of sensory data that you create and shape through your actions.
Any discovery (say, through a standard neural net learning algorithm) of a new regularity in the data will make the code more efficient (e.g., fewer bits or synapses needed, or less time). This efficiency progress can be measured: it’s the wow-effect, or fun! A real number.
This number is a reward signal for the separate action-selecting module, which uses a reinforcement learning method to maximize the future expected sum of such rewards or wow-effects. Just as a physicist gets intrinsic reward for creating an experiment leading to observations that obey a previously unpublished physical law allowing for better compression of the data.
Or a composer creating a new but non-random, non-arbitrary melody with novel, unexpected but regular harmonies that also permit wow-effects through progress of the learning data encoder. Or a comedian inventing a novel joke with an unexpected punch line, related to the beginning of the story in an initially unexpected but quickly learnable way that also allows for better compression of the perceived data.
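A minimal sketch of the compression-progress idea in code. This is illustrative only: `zlib` stands in for the theory's adaptive encoder, and the function names are invented for this example:

```python
import random
import zlib

def cost(data: bytes) -> int:
    """Coding cost in bytes; zlib plays the role of the learned encoder."""
    return len(zlib.compress(data))

def wow_effect(history: bytes, observation: bytes) -> int:
    """Intrinsic reward: bytes saved by encoding the new observation
    together with the history (exploiting shared regularities) rather
    than from scratch. Large for learnable patterns, small for noise."""
    marginal = cost(history + observation) - cost(history)
    return cost(observation) - marginal

pattern = b"abc" * 1000  # highly regular sensory history
rng = random.Random(0)
noise = bytes(rng.getrandbits(8) for _ in range(3000))  # incompressible

print("pattern:", wow_effect(pattern, pattern))
print("noise:  ", wow_effect(noise[:1500], noise[1500:]))
```

An agent that selects actions to maximize this signal seeks out data whose regularities its encoder is currently learning to exploit, which is the formal version of the curiosity described above.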
You know, before I came here I thought: this is just another TEDx talk and there won’t be much of an audience, but you are actually a large audience by my standards. The other day I gave a talk and there was just a single person in the audience.
A young lady. I said: Young lady, it’s very embarrassing, but apparently today I am going to give this talk just for you. And she said: OK, but please hurry, I gotta clean up here.
The Formal Theory of Fun and Creativity explains why some of you find that funny. If you didn’t get all of my explanation, look it up on the Web, it’s easy to find.
The emerging robot civilization
Creative machines invent their own self-generated tasks to achieve wow-effects by figuring out how the world works and what can be done within it. Currently, we just have little case studies. But in a few decades, such machines will have more computational power than human brains.
This will have consequences. My kids were born around 2000. The insurance mathematicians say they are expected to see the year 2100, because they are girls.
A substantial fraction of their lives will be spent in a world where the smartest things are not humans, but the artificial brains of an emerging robot civilization, which presumably will spread throughout the solar system and beyond (space is hostile to humans but nice to robots).
This will change everything, much more than, say, global warming. But hardly any politician is aware of this development happening before our eyes. It is like the water lilies that cover twice as much of the pond every day, but get noticed only a few days before the pond is full.
My final advice: don’t think of us, the humans, versus them, those future über-robots. Instead view yourself, and humankind in general, as a stepping stone (not the last one) on the path of the universe towards more and more unfathomable complexity. Be content with that little role in the grand scheme of things.
I wish to thank the organizers for doing a great job, and for the check, which I am going to spend on my kids. I wish to thank my Mom and my Dad, without whom all of this would not have been possible. I wish to thank my kids, without whom all of this would not have been necessary. And I wish to thank you, my lovely audience, for your patience.
See also: Turing’s enduring importance
Comments (40)
by Scott Mitting
Don’t worry too much. If they offered it as a minor, genetic algorithms would have been my minor at Purdue. I will be running for congress in 2016, but am already participating in local politics. The people that grew up with computers just aren’t old enough yet to be major players in politics, but we get older every day. We’ll be a part of the fold of mainstream politics soon enough.
by Eddy Newman
Dear John and Dan, I just love your bickering, it’s so…, well…, it’s so human. Can’t you just see it: in the distant future, the world watches the first debate between two supercomputers, each convinced it has the correct answer and trying to convince the other that it is right? I wonder if people will place bets on which computer will get mad first and just shut itself down.
by John Kulp
This statement that computers can only do what they are programmed to do went out about 60 years ago with the invention of Monte Carlo simulations. The output of computers can be as unpredictable as quantum mechanics or multibody mechanics. How? Well, if you have software that reads the thermally generated random-number values available in Intel processors, the output of simulations is unpredictable. Video inputs from the real world would serve just as well, or web pages from the Internet. Monte Carlo simulations are an example of fully random primitive operations resulting in emergent behavior, the details of which cannot be predicted, although the average (emergent) behavior can be, somewhat. Another example: have a specified network transaction protocol implemented independently by five programmers, and set computers running those programs talking to each other. Back in the 1970s we found emergent behavior in this that was totally not predicted, even in principle, by the programs.
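A minimal Monte Carlo sketch of this point: each individual sample is unpredictable, yet the average behavior converges predictably, here to an estimate of pi. (A fixed software seed stands in for the hardware randomness, so the run is reproducible.)

```python
import random

def estimate_pi(n: int, seed: int = 0) -> float:
    """Fraction of random points in the unit square that fall inside
    the quarter circle, scaled by 4: converges to pi as n grows."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / n

print(estimate_pi(100_000))  # close to 3.14159, though no single point is predictable
```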
Intelligence is an emergent phenomenon that depends on being embedded in an informational context (perhaps not unlike molecular biology not working unless it is embedded in a cellular context). The primitive operations are not the interesting thing.
The flawed argument is saying that simple rule statements directly implement intelligence. It was understood decades ago that complex adaptive systems in many layers of organization are required for complex behavior leading to intelligence. Physics is a simple (or not so simple) set of “rules” which implements humans. But so what? The informational structures in brains, etc. are not directly derivable from such physics, any more than they will be derivable from the primitive operations of the hardware implementing machine intelligence.
by Dan Foley
Hi John
Monte Carlo simulations do not address this question in any way. Being programmed is perfectly compatible with being unpredictable. Isn’t this your claim: we can program computers to be unpredictable? A computer is not unpredictable on its own. If we program it to be unpredictable, it is.
Intelligence IS an emergent phenomenon. And yes, the primitive operations are not that interesting. But isn’t this because the primitive operations have nothing to do with intelligence? What do primitive organic operations have to do with intelligence? We aren’t striving to create computers that can replicate the behavior of amoeba, even though amoeba may teach us a great deal about the “informational context” of primitive organic operations. It isn’t until we get to the highest order of organic beings and one in particular, man, that we see the intelligence we are striving to recreate. Yet all our attention is paid to the primitive operations! Since we assume that the key at this level is unpredictability, we are blind when it comes to the crucial transformation; the actual, eventual, emergence of intelligence.
I would suggest there is a much more relevant context than the informational context of the primitive operations, a context specific to the organic being(s) we should be most concerned with, since this is the intelligence by which we will measure our success in creating artificial intelligence. This is the context of our needs as we feel and express them. Will we create computers that feel pain or pleasure? Will we try to create a context in which a machine will fear death, or feel anger, shame, love, joy? Is our emerging experience of these things not crucial to the emergence of our intelligence? Do these needs, feelings and experiences tied to our organic existence not shape or channel, and limit our intelligence? Do they not provide it a goal and end? Can we supply computers with these needs? Will they have to first develop these so that they have the proper context for the emergence of the phenomenon of intelligence? But we don’t consider any of this because we look only at the “primitive operations” on the one hand and the end result, intelligence, on the other. And we forget about these absolutely crucial intermediate steps, which grow out of the context of our needs, presumably because it is beyond our control to shape or create such a context.
Isn’t the hidden presupposition of EVERY claim that we can program machines with HUMAN intelligence that there is a GOD responsible for ours? Demonstrate this God!
http://wfnt.com/why-fruit-fly-testes-matter-and-not-just-to-lady-fruit-flies/
by SpottedMarley
Now THIS is the sort of story I expect to see on this web site. Fascinating
by mr K
we need to build sex machines quick!
by Tom Lane
Here is an excellent example of creativity being accelerated with the assistance of AI. It is obviously not real; however, it offers great insight into what the possibilities will be for artists and engineers in the (not so distant) future. The AI is nicknamed “Kurzweil” in this… which is amusing.
http://vimeo.com/42895938
by Eddy Newman
I love watching the robots that dance; everything in perfect alignment. But I wonder, are the robots thrilled with the applause? Here lies the difference between us and them: humans love life. We will gaze at a sunset, smell the sweet morning air and decide to take a stroll, or smile when we see a baby kitten. I don’t believe robots would ever find the need to build themselves an amusement park.
I do believe that AI’s will one day write great symphonies, perform ballets and even write music that can bring us to tears; but it will all be done for us, not for themselves.
Someone might say that AI’s could be programmed to have emotions, but if these machines really are intelligent, why would they do that? They would already be aware of an “organic machine” that is capable of learning about and appreciating life. They would just find some way to interface with us, because we have the one thing they are incapable of: appreciation.
I believe our next stage may be OI (Organic Intelligence). Who knows, maybe one day we can reprogram our own DNA by thought.
by Peter Simmons
Category mistake to describe any machine as ‘intelligent’. Intelligence is what sentient creatures have, even many hominids. Machines do what they are designed and programmed to do, they will never think creatively, or at all, they will respond as programmed and may at times be able to ‘fool’ a human that they too are human, but I doubt they would fool an intelligent human who was aware it could be a machine so was paying attention. All this talk of computers being more intelligent than humans at some unspecified time in the future is daydreaming by people who don’t understand the difference. Computers will doubtless grow in power and storage capacity, while our memories probably won’t, but since we have computers to remember for us that doesn’t matter. We will ALWAYS be more intelligent than computers as they are not intelligent and never will be. So all this bollocks about ‘the singularity’ is so much cultist hot air. Unless you’re a true believer of course, then you’ll be waiting for the rapture, whoops, sorry, the singularity.
You think yourselves superior because you have faith in the infallibility of science, but there’s little difference from the born-again loonies who await the rapture/second coming/paradise as if it were fact and not fiction. There is no end to the ability of humans to waste their lives on a fallacy; in fact, one might say it’s been our history.
by Giulio Prisco
@Peter re “We will ALWAYS be more intelligent than computers as they are not intelligent and never will be.”
So your argument is “because I say so?” I am afraid it is not a good argument, you will have to do a bit better if you want to be taken seriously here.
Re “Machines do what they are designed and programmed to do, they will never think creatively,..”
Another non-argument. Can you explain why you say so, and what the in-principle difference is between an organic and a non-organic brain?
by Dan Foley
In the first place, do computers have problems? In order to solve a problem, must there not be awareness of a problem? When we use a computer to solve a problem, the problem is ours and the computer is a tool. Hammers help us solve problems as well. Is a hammer intelligent?
But computers in robots solve problems they encounter. How does a computer encounter a problem? AI encounters problems when carrying out directives. We may no longer have to direct it in order to solve the problem, but our direction is what led it into the “problem.” Without our directives, or mission, no problem. And how does it solve it? By accessing information. By looking through folders. All computation is based on accessing the 1s and 0s. The outcome is necessary. Every computation of how to solve the problem will result in the same most efficient solution, until the parameters are changed. If you look in the same file folder over and over, you find the same information. (A huge advantage in a tool.)
So creative thinking is a question. You avoid the question, ‘do machines only do what they are designed and programmed to do?’ by changing the terms. What does the difference between an organic and non-organic brain have to do with this? If organic brains give necessary, pre-programmed answers, they would also be incapable of creative thinking, would they not? Is that what organic brains do? Is that what non-organic brains do? Is an organic brain a tool, in the same way that non-organic brains are? If so, who or what uses that tool?
by Todor Arnaudov
Peter, are you an artist or a creative, or a researcher? I am all of these, and I know that “creativity” is “mechanical”; the ones who think it is something magical are just not creative, or their brains lack reflective abilities and they don’t understand themselves. If you did, you’d have seen the mechanics behind your thoughts. That is, many people like you, who fail to understand themselves, believe there is magic there.
Regarding “intelligence” – people like Schmidhuber (I am one of them, and I share his school of thought) define intelligence before using the word. They don’t talk about some random meaning that a random person walking down the street believes it should have.
Intelligence defined formally can be decided as “present” or “not present” anywhere, and as defined by Schmidhuber it can be measured, in a machine or anywhere else.
As for sentient beings, thinking machines could say the same thing to you: you don’t have intelligence and you’re not self-aware – you don’t even understand how elementary and mechanical creativity in fact is, you apparently believe your intelligence is “magic,” and besides, you’re so much slower than a computer in such trivial fields. Therefore you’re neither intelligent nor conscious. Sorry. :)
Humans, too, are “designed” and “programmed” to do what they do; it’s all about defining “designed” and “programmed” and the details thereof. Every molecule or subsystem in human physiology can be assumed to be “designed” to do what it does, and if it doesn’t work “correctly” (is not set “correctly” in some sense, from some point of view), something breaks down and goes “wrong” from that POV – a neuronal signal, a cell, an organ, a behavior, or the existence of the entire organism ceases.
by Dan Foley
And another thing…
Over at SciForums, Singularity thread, on a machine’s “need” for energy:
Yes, machines need energy to function. But do they need to function? Would a machine struggle to survive, e.g.? Would it fight for energy, if energy were scarce? Right now, we use machines to help us do things that we need or desire to do. The machine needs energy to fulfill this task. But we supply it with the task. The originating source of motion is OUR need.
In “The Singularity is Near,” Kurzweil talks about how complex it is to reverse engineer the brain and then with this knowledge, design a machine that will “understand and respond to emotion,” which is also very complex and will require a vast expansion of computational power. I don’t doubt that such a machine is coming, and soon. But the question is whether or not the machine that can understand and RESPOND to emotion will FEEL emotion? If it doesn’t, then the only reason to make a machine that can understand and respond to emotion will be for the sake of biological human beings, who do FEEL emotion. We understand and respond to emotion with far less computational power than computers BECAUSE we feel them. If a machine does not feel emotion, if a machine cannot be elated or dejected, head-over-heels in love or heart-broken, proud or ashamed, confident or fearful, etc., how can it be said to UNDERSTAND these emotions? I have no doubt we will build machines that will get better and better at calculating how to respond to the emotions it detects human beings experiencing, through voice analysis, brain scanning, etc. But human beings are much more efficient, require FAR LESS computational power to understand and respond to emotion, BECAUSE we FEEL emotions. From an engineering perspective then, it would seem to make sense to engineer machines that feel pleasure and pain, and everything else we feel and need them to respond to, based on the principle: no superfluous complexity. Only as complex as needed to solve the problem. Is it more complex to build a machine that FEELS emotion? Is it even possible? Or, is this a benefit to have intelligence that does not truly experience emotion, pain, pleasure…? We only need machines that respond to emotion in the mean time, until we move beyond them ourselves. Is that it?
by Dan Foley
http://wfnt.com/robots-who-need-robots-are-the-luckiest-robots/
by srgg67
Dan, don’t you find that emotions are just a tool for survival that was necessary in the course of evolution? So it’s not quite clear to me why there is so much talk about emotions… If we suppose that AI should serve rational ends, it seems that emotions (or “pseudo-emotions”) should be used just for that reason. Or not?
by Dan Foley
Thanks for the question, srgg67. I don’t find that emotions are just a tool for surviving, do you? Have you ever been in love? Would you ever tell your love that you love her or him for the sake of survival? When you are in love it doesn’t feel like just a tool for survival at all, does it? Why couldn’t it be the other way around: survival is for the sake of love? Why couldn’t we even understand it this way in the case of self-love? We do not love for the sake of survival, but survive for the sake of love. Perhaps this is one way to explain suicide, e.g.?
Have you seen Transcendent Man? It’s a documentary on Ray Kurzweil, and death is powerfully present in it from the opening. Death is no longer something to be resigned to; such resignation is no longer the only rational approach to death, according to Kurzweil. Death will be overcome through scientific progress. Thanks to the exponential growth of various technologies, we will conquer death in the near future. We read about advances in this direction all the time.
The reason I like Kurzweil so well is that he looks at the big picture of this development. But there are problems, it seems to me. He views technological advance as an extension of biological advance, i.e., evolution. He takes as given the pinnacle of biological evolution, human intelligence as a tool that aids our survival. Right now, human intelligence, as a tool that has evolved for the sake of human survival, directs ALL technological advance toward the goal it has served since it emerged hundreds of thousands of years ago. It’s older than that, even. Survival is THE biological goal, and human intelligence is but a tool that has proven extraordinarily useful in the pursuit of this goal. Yes, I understand that computers now assist us enormously (and have for some time, and increasingly so) in all areas of technology creation. But they are no more than an extension of the original tool and model, human intelligence. This in no way replaces the goal for the sake of which we apply EITHER tool. The end for the sake of which we create technology-creating technology remains the ancient goal of biological existence, survival, our survival, and even our individual survival. Will this change? In fictional dialogues with post-singularity individuals, it is clear that the morality that developed for the sake of biological evolution somehow remains operative. But how can this be if it is divorced from the needs for the sake of which it developed?
If human intelligence emerged because of this need, and serves this need, how will an artificial intelligence emerge without need? Or what will this need be? Will we have to build the need as well? Will we try to engineer AI that feels pleasure and pain? That fears for its survival? That loves? If AI has no such motives, what will drive it? Right now, the most that Kurzweil envisions are machines that understand and respond to human emotions, i.e., that will be able to detect emotions, with whatever array of physiology reading sensors, and responding in a way we’ve programmed as appropriate given the emotion it detects. But I want to know if these machines will FEEL emotions? And if they don’t, how can they really be said to understand them?
by Dan Foley
From the article:
“Creative machines invent their own self-generated tasks to achieve wow-effects by figuring out how the world works and what can be done within it. Currently, we just have little case studies. But in a few decades, such machines will have more computational power than human brains.”
But my question is, will creative machines experience “wow effects” AS wow effects? Humans experience wow effects AS wow effects, which is WHY we are able to detect and create them. Will creative machines not require “more computational power than human brains” to get to wow effects, precisely because for these machines there are NO wow effects as we experience them?
A similar question presents itself regarding this statement:
“Now you say: OK, computers will be faster than brains, but they lack the general problem-solving software of humans, who apparently can learn to solve all kinds of problems!”
I would say the question is NOT about the problem-solving “software,” or ability. The question is about experiencing a problem AS a problem. Humans have problems, and machines help us solve them. But will machines ever have their OWN problems? Problems as we experience them. Problems that grow out of needs, needs tied to biology. Will machines have needs? That is the question. Not whether machines will ever be capable of meeting needs that needy creatures experience, for problem solving or creativity. What will machines NEED? Intelligence grows out of need, and does not equal computation.
http://wfnt.com/singularity-and-the-new-600-man/
by Weather Man
All of this sounds great and has probabilistic outcomes, but while the author does mention global warming, no army of robots can necessarily reverse the non-linear climate feedback effects that are already entrenched and will play out in the decades, if not centuries, ahead (i.e., let’s see an army of robots flock to the north and south poles and create artificial snow and ice to replace what’s lost before moving out into the solar system)!
by Simple
Jürgen’s theories give no consideration to the geopolitical forces at work in our world. The few people in this world who control the rich governments will use breakthrough AI and super-intelligent robots to control other countries and peoples. The history of disruptive military technologies has shown this.
by Spikosauropod
Global Warming? That old bromide?
Just today, Jane Goodall was interviewed in New York and observed that the 90 degree temperatures were the result of Global Warming. Such sophistry highlights the true nature of Global Warming politics.
by Peter Simmons
Exactly! Dream of the stars but watch out for the flood…
by Spikosauropod
“We don’t want to live in paradise. Paradise is boring. There is no life there… there is life in strife. Continuous overcoming of difficulties.”
That sounds like a line from The Matrix Revolutions. I would like to try it the other way for a while.
by sashamilo
Well the fact that it sounds like a line from the Matrix to you is not relevant to the merit of the idea.
by Spikosauropod
All the same, I would like to try it the other way for a while.
I have never lived in paradise. Maybe I will like having whatever I want and never being sick or afraid.
by sashamilo
How can we praise someone that thinks that “humanity is a stepping stone on the path of the universe”? I think it is monstrous, inhuman and immoral. Treating humanity as a means to an imagined and ludicrous destiny, thought up by a man who is plagued by his own inadequacies and inability to find peace. It can only lead to destruction. There is no wisdom in this man. What is this striving to create machines that are smarter than humans? For what purpose exactly? To gain more knowledge? To solve our problems? Humans don’t find knowledge meaningful, we find the pursuit of knowledge meaningful. We find meaning in the solving of problems. We don’t want to live in paradise. Paradise is boring. There is no life there… there is life in strife. Continuous overcoming of difficulties. If you remove difficulties, you remove any meaning from life.
by Heikos
I think everybody wants to live in a paradise. When we understand our brains and are able to change them into anything we want, paradise won’t be boring.
Getting bored is quite probably some kind of self-protection that arose during evolution: boredom -> craving for new things -> discovering new things -> increased chances of survival.
But this function may very well be unnecessary in the future and therefore this functionality may be removed.
Can’t find a way to happiness? Happiness is in the brain. The brain can be altered. Therefore you literally build your own happiness.
by Peter Simmons
‘When we understand our brains and are able to change them into anything we want’ – how about a sticky treacle pudding? Do you read through what you just wrote?
by mike
…the only answer to your question of “why” is… “because”…
by Logic
This is a great talk. He’s missing the simple fact, though, as others have said here, that if we can build these machines, there’s no logical reason we can’t integrate with these machines as well.
The fundamental mistake most scientists make is to ignore the disruptive power of consciousness. Sentience gives rise to a self-preservation instinct in ever wider ways. The very fact that you can perceive yourself gives you the ability to control your direction, and ensure your longevity, if you choose. Just because our “offspring” (the machines) will have a greater capacity for doing this, too, doesn’t negate our ability to do it. As sentient, self-aware beings, intelligent AI will have morality and a “sense of life” as we do. We want to live and explore as long as we can, and that won’t change just because our “kids” are smarter than us. Also, they won’t wipe us out just because we’re “less capable” than them.
I imagine our future looks a lot like the lives of our pets, but with an infinite playground. We’ll be taken care of by our stronger-faster-smarter AI, and left free to play as we like. Or, if we choose to, we’ll integrate with the technology we have created and remain peers to our heirs.
by Spikosauropod
I mostly agree with you, except I don’t think our machines will necessarily have any urge toward self-preservation.
The human urge for self-preservation comes from billions of years of struggling to survive and reproduce. It does not come from being self-aware. We are so accustomed to this urge that we cannot separate it from our conscious experience. We view it as being a priori.
Machines need not have this urge. They could be completely altruistic. We could design them to be smarter than us, but act only on our human needs.
by Carl
I think you’re kind of right, but the thing about an urge toward self-preservation is that those who have it tend to outlive those who don’t. Over time this builds into what we have. There’s no reason that wouldn’t apply to machines; all it takes is a large number of iterations and a little bit of pseudo-randomness.
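[Editor’s note: Carl’s selection argument can be sketched as a toy simulation. Every name and number below is hypothetical, chosen only to illustrate the mechanism of iterated selection plus a little randomness.]

```python
import random

def evolve(pop_size=100, generations=200, seed=0):
    """Toy iterated selection: each agent is just a boolean trait,
    self-preserving or not. Self-preserving agents survive each round
    with higher probability; survivors reproduce (with a little
    mutation, Carl's 'pseudo-randomness') to refill the population."""
    rng = random.Random(seed)
    pop = [rng.random() < 0.05 for _ in range(pop_size)]  # trait starts rare
    for _ in range(generations):
        # Self-preserving agents survive 90% of rounds, the rest only 70%.
        survivors = [a for a in pop if rng.random() < (0.9 if a else 0.7)] or [False]
        # Refill from random survivors, flipping the trait 1% of the time.
        pop = [child if rng.random() > 0.01 else not child
               for child in (rng.choice(survivors) for _ in range(pop_size))]
    return sum(pop) / pop_size  # final fraction of self-preserving agents

print(evolve())  # the trait, initially rare, typically comes to dominate
```

Nothing in the loop “wants” to survive; the trait spreads simply because its carriers are sampled more often, which is Carl’s point against Spikosauropod’s claim that the urge can be designed away once machines replicate under selection.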
by Jorge
Maybe you are right. But what if the more advanced intelligence decides that you are just a nuisance and a danger to this world?
by Swee
It’s true. Being highly intelligent, I can tell you I am not all that smart. I am a human, and most of what I do is not even consciously directed. I am creative yes, but limited to what I am exposed to — I can only dream up new things using the information available to me in my brain. Artificial intelligence has no such limitations. Humans, cute and cuddly as we can be, are very overrated.
by Peter Simmons
Nonsense. If your dreams are limited by what you know, you have a very boring dreamlife. Even if you are highly intelligent. AI has no such limitations? Says who?
by dan
I think it’s a nice article, but it also highlights how even ‘experts’ can be blinded by a linear view of time; the compounding growth of technology across every area of life makes future prediction too hard for a human.
Case in point: “an emerging robot civilization, which presumably will spread throughout the solar system and beyond (space is hostile to humans but nice to robots).” Today, the civilization that can build a space-faring robot smarter than a human can probably also begin to engineer a biological equivalent to MAKE a human space-capable, or better craft that will allow us to travel safely in them, or perhaps entirely new ways to travel. Etc.
Everyone speaks about a specific technology as if it were developed in isolation. But if we can build space-faring robots, then we haven’t just transplanted our current lives into the same exact future plus robots; every other technology will also have improved and changed, along with the way people live, how society and the world are organized and run, our goals and dreams, etc.
by Ryan
I don’t see us being supplanted by technology so much as having our technology integrated into us as it advances. Will we have super-powerful AI? Potentially. We may also be able to add components to our own minds that make each of us capable in that same way. We may even augment ourselves to the point that current humanity, were it able to view the future accurately, wouldn’t see us as human. Then again, were we able to go back into the past with our toys, the people then would view us as gods or demons, so it’s rather relative. Heck, we might even become Transformers. In that eventuality I am formally laying claim to the name Optimus Prime for myself.
by Spikosauropod
What Jürgen Schmidhuber has done, in effect, is to reduce all of creativity and science to a matter of data compression. This is obvious, if you think about it. I suspect, however, that few people will get what he is saying.
When viewed in this way, AI becomes a relatively simple problem. I can see a clear path from here to his Omega Point. Curiously, it may involve very little gain in understanding of how the human brain processes information.
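[Editor’s note: for readers unfamiliar with the idea, Schmidhuber measures “interestingness” as compression progress. The sketch below is only a crude proxy, using zlib as a stand-in compressor; his actual theory concerns an *improving* compressor, not a fixed one.]

```python
import zlib

def marginal_cost(history: bytes, new_obs: bytes) -> int:
    """Crude proxy for interestingness: how many extra bytes does the
    new observation add to the compressed history? Data the compressor
    already predicts adds little; data containing a genuinely new
    regularity adds more (pure noise, which this fixed-compressor proxy
    cannot distinguish, would add the most)."""
    before = len(zlib.compress(history))
    after = len(zlib.compress(history + new_obs))
    return after - before

boring = b"abababababab" * 10        # repeats a pattern already in the history
novel = b"abcabcabcabc" * 10         # introduces a new, learnable pattern
print(marginal_cost(boring, boring))  # small: already fully predicted
print(marginal_cost(boring, novel))   # larger: something new to compress
```

Under this view, a “creative” agent is one that seeks out data on which its compressor can still improve, which is why both pure repetition and pure noise come out as uninteresting in the full theory.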
by Spikosauropod
Schmidhuber’s theory of intelligence may come to be regarded as one of those embarrassingly simple but astonishingly brilliant theories, like evolution by natural selection, that change everything. Ironically, it may also come to be regarded as the last such theory advanced by an unaided human.
by Peter Simmons
Or it could turn out to be the last truly silly idea he ever had. Embarrassingly simplistic I would say.
by Gert
Sure, it must be great to develop technology that will outrun humankind, but what if it turns against us?
Robots or AIs that are smarter than us are really not the way I think of the future. It’s clear that we have seen series that present us with the idea that robots will take over, and some of you people think that’s a good idea. Well, I don’t: we have enough potential and brainpower to do more than a robot can. But this is just what I think of this situation.