Ask Ray | How to Create a Mind thought experiment
February 10, 2013 by Ray Kurzweil
Ray,
I just finished reading How to Create a Mind. I found it both interesting and informative. At the end, I believe that there is an inherent difference between a human brain and an AI system, a difference that can’t be overcome by any amount of added speed and capacity. To illustrate this difference I have included a thought experiment:
Take the most powerful artificial brain in existence. Include all programs necessary to make it function as an independent, self-conscious entity. Let it read everything in existence up to, but not beyond, the birth of Albert Einstein.
With no further human intervention of any kind, how long do you think it will take this artificial brain to develop the theory of relativity?
Feel free to use the artificial intelligence capability you think will exist in 2029; but, again, limiting the knowledge input to that which was available to Einstein.
It is my belief that the actual human brain is sufficiently different from an artificial intelligence system that without any human intervention this theory would never be forthcoming. If you believe otherwise, I would be interested in seeing the process modeled.
Again, since the artificial intelligence system is a self-conscious entity, presumably capable of self-direction, I would expect no human intervention whatsoever in this process.
— Bob Caine
Interesting point, but keep in mind that all biological human brains at the time (except for Einstein’s) failed to come up with relativity either.
Einstein’s brain was ahead of the curve, but nonbiological intelligence will continue to improve both in hardware and software (algorithmically) past 2029.
So perhaps it is the AI of 2035 or 2040 who would be able to come up with relativity in your thoughtful thought experiment.
— Ray Kurzweil
My point in using Einstein’s Theory of Relativity in my thought experiment on AI equivalence to the human brain was not related to whether or not Einstein had the support of others or how exceptional his mind was.
Rather, it had to do with the ability of an AI system to have a “sense of purpose” of its own without human intervention. My question had to do with how an AI system would decide, without human assistance, that there is any reason to want to know the exact relationship between matter and energy; the relationship between the speed of light and the relative motion of those observing that light; or, for that matter, the relationship between the cosmic microwave background and the Big Bang.
Given the task, I can readily see the role an AI system could play in deriving a solution. But how would it decide on its own that studies such as these should even be undertaken and then design, execute, and assess the related research to arrive at a verifiable theory?
— Bob Caine


116 comments
by brenarda
If you upload the consciousness at this primitive stage, you get less than if you upload it with a processor chip or a computer chip; and if you put the chip inside humans, they might be more creative than AI.
by Craig Knaak
All I can say is: http://www.wired.com/wiredscience/2009/04/newtonai/
by knpstr
I think the AI decides to figure this out when it decides that in the future it is necessary to leave this planet. I feel that if it, the AI, makes the determination that the Earth will one day be “outgrown,” it will logically turn to space; in doing so, it will have the drive to learn everything about space so as to make any trips or exploration accurate and safe. Essentially the same way humans decided space was important. But who is to say when the AI would figure this out.
by landis
How is ‘necessary’ programmed into a computer while still maintaining some guise of its self-determination?
by David B
It’s interesting how people who are new to ideas and concepts about machine intelligence will resort to a type of magical thinking about the ‘qualitative differences’ between feelings, intuitions, dreams, etc., and information processing.
My take on this ‘emotion vs. logic’ meme is that emotion and logic are simply different ‘technologies’ that humans (and other animals) use for making decisions. When trying to outrun a tiger, feelings are in control. When solving a math problem, thinking is in control. Fortunately, my brain usually chooses the right technology to use at the right time. It was designed that way, of course!
In the same way, we can design a program to have many different routines available to itself for handling real-world events. A programmer (perhaps with a sense of humour) could label some routines as being ‘emotional’, in the sense that they give a quick result based on the limited time or memory they make use of.
In this context, there’s nothing ‘magical’ or ‘sacred’ about emotions or logic. They are simply a means to an end.
We hope (and pray) that intelligent programs will get it right at least as often as we do now – and hopefully better!
by Steven Kaufman
I was involved in the first Chessmaster programs. In the beginning, when computer power was limited, we relied on heuristics: values given to open files or open diagonals, the value of a rook occupying the seventh rank (where pieces are stronger), or the value of a safe kingside position. But as time went on, brute force was used, analyzing every possibility. So I believe it will be the same for other “games” like physics. Eventually, these quests will be solved by brute force.
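The hand-tuned approach described above can be sketched as a weighted sum of positional features; the feature names and centipawn weights below are illustrative assumptions, not the actual Chessmaster values:

```python
# Illustrative positional weights (in centipawns); purely hypothetical values.
WEIGHTS = {
    "open_file_rook": 25,        # rook on a file with no pawns
    "rook_on_seventh": 40,       # rook occupying the seventh rank
    "open_diagonal_bishop": 15,  # bishop on an unobstructed diagonal
    "safe_king": 30,             # castled king behind intact pawns
}

def evaluate(features):
    """Score a position from a dict of feature counts.

    A heuristic evaluator like this guesses a position's worth in one pass;
    brute-force search instead expands every legal continuation and backs
    up exact results, which became feasible as hardware improved.
    """
    return sum(WEIGHTS[name] * count for name, count in features.items())

# Example: two rooks on open files, one of them also on the seventh rank.
print(evaluate({"open_file_rook": 2, "rook_on_seventh": 1}))  # 90
```

Early engines combined dozens of such terms; the shift described here was from tuning these weights to simply searching deeper.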
by Jake_Witmer
I think it’s interesting that Ray Kurzweil sometimes pops in to re-answer questions that were very well and thoroughly answered by all of his books. That some hidebound thinkers can’t get their brains around his detailed answers is more of a problem of conformity than a problem inherent in his answers. http://en.wikipedia.org/wiki/Asch_conformity_experiments
Although the prior conformity experiments indicate that something is very wrong with low-level collectivist human thinking (in terms of simple error), later experiments would indicate that there are deeper and far more significant flaws in most humans’ morality (the override of their mirror neurons or “consciences” based on their perceptions of the group, and the commands of the sociopaths in the group to deny their own morality). I’m referring to the work of Zimbardo and Milgram, of course. A speech on that work, and its implications, is here: “The Psychology of Evil” by Philip Zimbardo http://www.youtube.com/watch?v=OsFEV35tWsg
As for whether an artilect could ever come up with the theory of relativity, or would ever choose to investigate the ideas necessary to do so, it seems patently obvious that artilects will outperform humans in all areas of science and eventually artwork. The entire book “The Age of Spiritual Machines” explains why this is the case. Moreover, human goal structures will be fully understood, even if it’s as slow as full modeling of the human brain, which I doubt it will be.
Realistically, software will likely solve many of the remaining physical problems within the next 5 years. Also, unlike the dim-witted humans to come before it, such software / AGI will likely have a rational prioritization of the problems that humans (other than Eric Drexler and a very few others) grotesquely lack.
by bh
Once you understand and control everything, all that’s left is to dense up and absorb the most matter you can by moving to the center of your galaxy.
by Rav
AI is based upon the current technology of our computers, using off-or-on switches.
Human thought is based on a minimum of three choices: off, on, and maybe on/off. Until that is addressed, AI will never get off the ground comparatively, despite Asimov’s laws.
by Eric Horwitz
IBM has already created cognitive computing.
Google: “Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE)”
by eskimo1nyc
There is a difference between Dr. Watson (not AI; it is an expert system) and a supercomputer that would be capable of natural emotional intelligence. To tap into “emotional intelligence,” which is a resource only available to humans, robots would have to capture human pigs (the guinea pigs of 2049) and plant a microchip into biologically wired human brains to extract the emotional intelligence, the spirit. The robot would then plug it into its own electrically wired brain and boost its intellect with emotions. That’s the only way an AI brain can invent the next relativity theory, I believe.
by Jake_Witmer
I believe that what you suggest will likely happen, but I don’t think it’s the only way. I think there will be many kinds of minds, determined largely by brain structure (although there are thousands of ways this could go, I’m indicating my loose prediction of what I think is likeliest). I also think that science will come far more easily and cheaply to synthetic minds, because they don’t have to “train themselves” to ignore wrong ideas that are intuitive. The lack of bias, perfect memory indexing and recall, reversibility, vastly larger memory, and the ability to approximate human decision-making with heuristics (at a low level) will all work in their favor.
I also believe that physics will be solved with more and more “brute force” to search previously unsearchable spaces, and then inductively reverse-engineer the approximation or “law” from the massive evidence body. Also, the ability to simulate entire ideas inside of an individual brain or “massive, goal-directed neural net” will lead to robotic scientists far more rapidly throwing out unproductive and incorrect research directions.
In short, there wasn’t much reason for me to write this. Kurzweil and Drexler already said it in the 1980s. “Like most humans, I bring little or nothing to the table.” (This is a great quote for 99.99% of humanity. It’s especially true of non-libertarian humanity, because they don’t even bring a toddler’s level of morality to the conversation.)
I think the central question for all humans is now: “Do you disavow and consciously attempt to avoid the initiation of force (Expressly: “Are you a libertarian?”), or not?”
If you do not disavow the initiation of force, there’s no reason to talk with you, except to clear up or define the issue.
Tyranny is the number one problem that humanity faces, yet humanity has been schooled from a young age to avoid addressing that problem. Problems like “creating a better model for physics” (such as Einstein’s relativity, or even Wolfram’s “New Kind of Science” based on cellular automata) are trivial by comparison, because their solution will arrive with the tools to solve them, one of which is a free market of ideas. However, there is no free market of ideas under state coercion, nor is there the wealth necessary to address such problems. Nor is there the upward mobility to draw every genius-level mind into the discussion. Nor are there proper, non-perversely-incentivized goal structures that naturally find the most difficult problems. Nor is there the sense that the courts will reward one’s effort by protecting one’s property and intellectual property, unless someone has an “easily defensible and easily understood” solution to a problem.
by Jim H
Ask the emotionless AI to solve the Lorentz contraction problem and relativity would pop out fairly easily. It was humans who were wedded to their perception of spacetime.
by Thomas
Why are people so afraid to accord future AI systems a sense of purpose? When we build an AI that mirrors the human brain’s capabilities, it will behave, think, act and, very likely, feel exactly the same as we do. As an example, consider the aged trope of AIs lacking emotion. The general AI we build will inevitably be required to navigate real-world environments. It will thus need a sense of self-preservation (or it will be destroyed in short order by misadventure). Part of this sense of self-preservation will require it to recognize and respond to threats with heightened priority (a car is rushing at you unexpectedly down the street: stop developing a theory of relativity and react immediately to the oncoming vehicle). To achieve this, a signalling mechanism will be required to indicate that a stimulus has heightened priority. In humans we call this signal fear. An AI will probably use the same word, and its reaction (transferring power to locomotive circuits, reducing power to higher-order thought processes) will probably feel quite similar. I very much doubt we will achieve perfectly Turing-capable general AI without first giving it emotional equivalence. And with emotion will come a ‘sense of purpose’ which is, after all, a feeling.
by Bri
I’d be careful. Too much potential for the same problems that affect humanity.
by Gabriel
You are both right. Tropes like that, Thomas, are really ‘hard-wired’ into the common person. It’s what to expect after so many decades of AIs and technology almost always being portrayed negatively in the media. Add that to other, often quite rational, reasons and you have a situation where a lot of people are fearful of strong AI.
Of course, again, there are reasons to be skeptical and concerned about raising a strong AI, particularly a benevolent one, which is what we want. However, they are often hidden underneath a mire of reasons, like the emotionless AI you went into, that can seem silly and come strictly out of these long-perpetuated memes.
by Jackus
Media and entertainment people have the most responsibility but choose to be irresponsible.
Please do not portray AIs as emotionless Vulcans anymore.
Actually, Vulcans don’t exist.
by Editor
That is logical.
by Clyde
>> I very much doubt we will achieve perfectly Turing capable general AI without first giving it emotional equivalence.
The question I’d like to propose is: do we need to? Is ‘emotion’ needed in an AI system, or will pure logic suffice?
Your self preservation example of the car scenario is a perfect case of Logic.
The same could be said of an “emotionless” person dealing with, say, the loss of a loved one. Logic would help the person survive. Emotions would just be a crutch.
“Emotions are like a virus, a common cold, disrupting the flow of logic in the mind”
by Brian Kelly
Self-‘preservation’ is a backup; purpose is a request or goal. To assign a requirement for emotion is human, and irrelevant to the machine. If the machine needs to portray emotion to better accomplish its goal or purpose, then it will. I’m not sure that this is emotion, but I believe it might as well be. I know my children display emotions for benefit; they are learning emotions by experimentation. If a person has difficulty learning emotions, then the result is inappropriate behavior. It is a method of communication.
by Jb
Emotional equivalence won’t strictly be necessary, but I suspect that it will be required before we generally accept artilects as conscious.
My suspicion anyway is that the first artilect will be an emergent behaviour of a learning machine, and part of this emergence will be a series of learnt and hard-wired emotional responses.
The difference between this first artilect and a human will initially be the unique ability to replicate the new intelligence elsewhere, or even to rewind the intelligence to some arbitrary checkpoint.
by DogmaSkeptical
It seems to me that both the premise and the structure of BC’s thought experiment are flawed in a way that renders the exercise invalid and (purposefully?) misleading, in that only one conclusion is possible. The premise requires a single individual AI with no external interactions (“with no human intervention,” it is an isolated knowledge base), and posits that it must develop a specific “sense of purpose” to be equal to the human mind. But isn’t “sense of purpose” an emergent product of extensive social interaction? In the context of a single isolated individual, a “sense of purpose” is as irrelevant as the concept of color to a blind man. To check, try to run this experiment on a single human: the premise itself fails because the subject, an isolated hominid adult completely devoid of interactions with people over its lifetime, would not be a human being at all, just an animal.
by SmartAndSober
Can a robot solve the Turing halting problem? Break Gödel’s incompleteness theorem? I believe they can.
Actually, it is easy. Just include an uploaded version of a human brain (an incomplete one is enough), which can provide the uniquely human intuition for solving such problems. (Or, if that fails, try grafting vat-grown human nerve tissue into robots.)
by Bri
I found the premise so flawed that I was surprised it was even chosen for an ask Ray.
by Mark
Agreed. It’s an interesting topic, but the thought experiment accomplishes nothing, in my opinion. We’ll see that not only will AI discover and describe relativity, it will describe it with more accurate formulas, tensor calculus, etc. It will also simulate it in such a comprehensible way that even the layperson will understand it. The only necessary human intervention, if one wanted to speed up the process, would be to create a desire for an AI system to discover relativity.
by NakedApe
We seek to understand the world around us because it helps us to survive and reproduce. So, how about we tell an AI that if it doesn’t come up with the Theory of Relativity, we will kill it. That should give it motivation to think real fast. Unless, of course, it doesn’t care whether it survives or not. Oh well, back to the drawing board…
by Re Ro
I think BC asked a great question, which has nothing to do with time travel, the existence of AI as a “tool” for humans, or Einstein’s research in particular. It has to do with the ability of any AI to be creative, to have a “flash of genius,” to be able to think of *any* of the great scientific or other advances that have been imagined by humans. I think the underlying question is not one of capability but one of motivation. Why would an AI entity, as we understand them, ever ask and then answer any deeper, theoretical question without motivation, and what would the source of that motivation be?
Watson, as an example, answers questions because it is programmed to answer them. I think we all agree that Watson, while an amazing achievement and a big step along the way in our development of AI, is not conscious or motivated in the least and is still, alas, complex software.
I believe the ultimate answer to BC’s question will have to be yes, one future AI entity among many will develop the questions that form the core of what we consider “deep” theoretical problems, both scientific in nature and not. My opinion necessarily implies that AI entities of the future must have, among other biological and social attributes, “agency”, imagination, self-motivation, and social motivations, which I believe will be self-emergent. I also believe that this implies some AI entities will be lazy, some will utterly fail, while others will be spectacularly capable. And most will fall in the middle of their capability range. Just like people.
by Brian Kelly
Given a purpose and free rein to explore possibilities would seem to give the ultimate creative freedom, allowing the exploration of all possibilities regardless of existing theory or dogma.
Self-‘preservation’ is a backup; purpose is a request or goal. To assign a requirement for emotion is human, and irrelevant to the machine. If the machine needs to portray emotion to better accomplish its goal or purpose, then it will. I’m not sure that this is emotion, but I believe it might as well be. I know my children display emotions for benefit; they are learning emotions by experimentation. If a person has difficulty learning emotions, then the result is inappropriate behavior. If the inappropriate behavior is encouraged, the purpose has been achieved. It is a method of communication.
We will have AI with social attributes that mimic humans, but only because we expect them, and we will encourage them as we do our own children.
by Josh Trutt
Brian, it makes sense that, as you say, if displaying an emotion will facilitate a machine reaching its goal, then it will. However you lose me at “this may as well be emotion.” They don’t seem equivalent to me. A machine may note that when your children raise their eyebrows or puff out their cheeks or change the volume or tone of their voice, you respond differently. And it may mimic those. It may even learn that the societal response that is expected if you yank away its toy is to stomp its feet and make loud sounds. But that is not the same as feeling loss or feeling injustice. For the computer, the sense of ‘injustice’ would not cause it to rush out in front of a car to chase the toy– i.e., emotion would not trump logic. In humans, emotion very often trumps logic. I don’t think an AI system would make that choice unless you programmed it to. So, it would be (as stated above) more like today’s depiction of a Vulcan, unless it were programmed to act against its own best interests “sometimes.” If an AI construct were designed specifically to “learn to act like a human” to the point that it could meld into society, it would see that under certain conditions people will take their own lives, and it could eventually ‘learn’ that it “feels” so “badly” about its “life” that it should “kill itself.” But that involves so many quotation marks that it is hard for me to believe. It is not hard for me to imagine AI solving virtually any problem given to it. It is hard for me to imagine it subverting its own survival to the cause of ‘acting human.’ It would make for an interesting program though… figuring out when to preserve itself and when not to.
by Dan Pendergrass
It will contemplate how to reverse Entropy, because ….
by Jackus
When a superentity lives to a million years, a thousand years will seem trivial to it. An extra order of magnitude of lifetime will make life look totally different.
Living forever seems fantastic to me.
So yes, please reverse entropy.
by DCWhatthe
No guarantee that it would come up with Relativity at all. That’s part of our human history; there’s no reason to believe that this would be the chosen topic, just because we label it as one of the great achievements of that era.
The AI, with its unique perspective, would very likely explore different topics, and perhaps come up with something more general than General Relativity.
I’ll be sure to ask the AI, when it shows up on my doorstep.
by AGreenhill
Bob, if you’re not sure that a general AI could come up with the theory of relativity, then either: 1) You don’t believe the brain functions as Ray Kurzweil has described in his book – OR – 2) You don’t believe that what the brain does is sufficient to explain human thought … I think most likely you did not read the book in the first place. Do give it a go, it’s very interesting.
by Jackus
What if the brain alone does not explain human thought?
Some theorists may choose the ‘holistic approach’. If a brain alone is not enough, add the rest of human body.
I believe the unification of human mind and non-human computation power will manifest in something greater than the sum of its parts (perhaps a ‘product’ of its parts, or even greater than that).
by Eugene Zavidovsky
Bob Caine asked: “But how would it [AI] decide on its own that studies such as these should even be undertaken and then design, execute, and assess the related research to arrive at a verifiable theory?”
The answer has already been provided by Teddybear in the previous comment and by other people here. AI should be developed to resolve the problems of humans. That is how it will decide what studies should be undertaken.
And the list of those problems should be regularly assigned through SMS direct-democracy voting… Ha ha, yeah, it is even more off topic than time traveling, but I think it is very important. Please read this project of authority-system reform:
>>> https://plus.google.com/105069201369945916209/posts/LFDdmQsKJoR
by Teddybear
About “sense of purpose”:
Machine/AI is not an isolated island; it is always bound up with human endeavor. Presently, machines and the internet supplement human research and everyday cognitive development. They do not have an “isolated sense of purpose.” Yet, with human involvement, they do have a “symbiotic sense of purpose.”
Come back to Einstein’s era. Then, the machine was paper and pencil. Then, the internet was day-to-day interaction among researchers and conferences. Then, the machine could not have a separate sense of purpose. Yet, in Einstein’s hand, his pen wrote beautiful formulas.
Human beings are symbiotic with the tools they develop.
Another point is how much sense of purpose can be judged as a “sense of purpose.” It’s based more on technological complexity. Technologically complex societies show a higher degree of “sense of purpose,” like today’s developed world compared with most societies two thousand years ago.
High technology, itself, has more sense of purpose than low technology, like the computer and internet compared with pen and paper.
Another point is time travelling. What-if questions. Alternative-history questions. The soundness of these kinds of questions is often questioned.
by WLGJR
… Another point is time travelling.
Kind of off topic, but I wish to talk about “real” time travel into the past, as opposed to virtual time travel into reconstructed historical worlds and past events. (Time travel into the future is relatively trivial; it requires only slowing down the subjective time flow, e.g., via cryonic suspension or relativistic spacecraft.)
If an intelligent being successfully builds a time machine, travels into the past, and alters the past (which, with the fabled *butterfly effect* in mind, is actually much more easily done than pop SF writers imagine), enormous change will happen at the time traveller’s home point in time (the farther he or she travels into the past, the greater the *change* would be).
If the change is malevolent or negative, the time traveller should be the one who is blamed.
But, as well, according to some philosophers, humans do *not* really possess *free will*. Therefore, the time traveller should not receive the blame; instead, the malevolence is *inevitable*, as *everything and all happenings in the universe*, including the creation of the time machine, are predestined and unchangeable.
Or, does *free will* actually exist? (This starts to sound mysterious and even spiritual)
As well, if time travel becomes possible, we can exploit it to do computations, as Hans Moravec outlined in his book “Robot: Mere Machine to Transcendent Mind.” A time-travelling computer could receive a question (a complex and time-consuming problem), compute a solution (which takes a long time), and send the solution *backward in time* to a point immediately after the human user asked the question. (This is only a very elementary example; I guess if we actually achieve time travel we will invent even more elaborate computing techniques.)
by Gabriel
To be perfectly honest, WLGJR, I don’t see the point of actually attempting to time-travel. Virtual reality will enable us to “visit” virtually any era, as well as construct our own worlds, no matter how imaginative, all without having to worry about issues with causality, paradoxes, or any other issue you could think of with regard to ‘true’ time travel.
What’s the point? In VR, my imagination is my only limit. I could create a flawless construct of a previous era, or twist it if I wish, or create a playground that breaks the laws of physics. I could do anything I want and not have to worry about breaking the space-time continuum or any such thing.
It’s important, when asking such questions, to remember the sort of things intelligence will be capable of in the future when wondering what would or wouldn’t be possible. However, with time travel, I feel there would have to be a reason beyond doing it for the heck of it: something that would justify such a profound undertaking without risking creating problems. Personally, though, I feel VR would be more than sufficient to satisfy most people. Once again, when I can safely travel to any environment or scenario I can think of, no matter how real or imaginary, what’s the value of attempting the real thing anymore?
by dave
You may satisfy yourself with either of the following two explanations about time travel. First, consider that going back in time also means every dimension has to go back as well. Since that time also took place in another part of the universe, it would be kind of difficult to get there without a really fast ship. If you got there, you would certainly not be allowed to change anything; if you did, it would change your present time. Or, secondly, you can imagine that time travel was invented/first realized in 1977, and after throwing all physical laws out the window, we have been trying ever since to correct mistakes made by our interference. You certainly can travel in time, as we do it quite easily, in one direction.
by tim the realist
Humans are very conceited. When an AI becomes aware of itself it will not care very much about our meaningless philosophical drivel.
by Jake_Witmer
Some AGIs may not care, others may. I think there will be many kinds of minds.
by Lisa
Machine intelligence is not the same as biological intelligence. But it could be as powerful (and probably much more so, given the capacity of the underlying substrate).
So, the laws of physics being what they are, presented with the same observations, a synthetic scientist will find the explanations behind these facts: the laws behind the observations, behind these data.
Now, to exhibit curiosity, a system (like a human) must have an incentive [we call this a goal or evaluation function] to search for (and find) things. Of course, it’s not so simple to teach a system a long-lasting goal (we may not have the required tech yet), but it’s the way to go.
The path taken to discover relations between observations has no importance (genetic algorithms, neural nets, semantic nets, top-down algorithms…): only the result is important.
We may never understand the path followed by a synthetic intelligence to find or discover something. It could be too inhuman. This is why great AI programs give people evidence about the path followed: for us to understand the path from the problem to the solution. It reminds me of a genetic programming experiment where the system found the right solution to a given problem, but it took us three days to work out how the system had found it, given the complexity of the resulting expression.
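The evaluation-function-driven search described above can be sketched with a toy genetic algorithm; the target, population size, and mutation rate below are illustrative assumptions, not taken from any particular experiment:

```python
import random

random.seed(0)       # deterministic run for the sketch
TARGET = [1] * 20    # the "law" the search must recover

def fitness(genome):
    """The incentive (evaluation function): bits matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

# Random initial population of candidate "theories."
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break  # a candidate fully explains the observations
    # Keep the best half; refill with mutated copies of survivors.
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=fitness)
print(fitness(best), "of", len(TARGET), "bits recovered")
```

Note that the search path (which mutations happened, in what order) is discarded; only `best` is reported, which is exactly why the route such systems take can be opaque even when the result is correct.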
by Cybernettr
I noticed the questioner changed his question after Mr. Kurzweil answered him. He insisted that his original question was about a “sense of purpose,” even though his original question said nothing about this.
His original question was whether an artificial brain would be able to do such things, not whether it would want to. The answer to his follow-up question is, of course, according to Kurzweil’s theories, that a sufficiently advanced artificial brain will have a sense of purpose, although I suspect there is no way to model or “prove” this.
by Jake_Witmer
Like the non-aggression principle in libertarianism (“NAP” or “ZAP” for “zero aggression principle”), there is no reason to “prove” that an AGI will have a sense of purpose, because “senses of purpose” are largely contextual. Also, “senses of purpose” need not be human, or emotion driven, or even compatible with emotion. Most people would have a hard time even defining “sense of purpose,” without referring to specific portions of the human brain, human emotion, and situational context. (And what about when all market needs are met? I’d probably still just wish I was smarter, so I could accomplish something that actually needs to get done.)
Also, “senses of purpose” are dependent on having the portions of the brain (giant modular neural networks) that motivate something (in this case, humans) toward action. These portions of the brain are contingent on the human lifespan (an artilect may see that it will live for thousands of years and decide to embark on a “long now” type of project that totally doesn’t concern humans; the “search space” of potential “problems” and “goals” is immense without 50 million years of a recursively applied evolutionary filter). Other filters applied to and constraining human goal structures are:
(1) Existence around humans that have more or less “similar minds”
(2) Existence around humans that have more or less similar bodies
(3) Existence around humans who can provide a means of one feeding and clothing oneself
(4) Existence around human market institutions which provide not just the essentials of life (common to all humans) but the ability to voluntarily choose individual “subgoals” based on one’s own relatively unique experience
(5) Human existence around a set of mathematical theories that has led to the creation of a certain kind of useful mathematics that is nonetheless a small subset of the mathematical space (as Stephen Wolfram talks about in his lectures on NKS and in his book “A New Kind of Science”)
(6) The expectation that humans will interact with other humans, and their initial development starting them off in constant interaction with at least one other human (the mother). Also, even humans with asocial tendencies are likely far more social than a strain of machine that was never the genetic product of a long line of ancestors who did not live to reproduce unless they were successfully tied to a mother for mother’s milk and nurtured by her. (NOTE: Humanity STILL produced a population that was 4% sociopathic and over 75% regularly conformist-to-any-system-no-matter-how-bad, or “directed into choosing sociopathic choices”! Imagine if 50 million years of evolution hadn’t created the mirror neurons! The default motivations are likely “uncaring” or “sociopathic.”)
(7) Existence around human language. (This is one normalizing factor for machines, assuming that they learn it and address significant resources to understanding it, in the eventuality that their lives are not dependent on understanding it, as our lives are. Still, imagine when they realize that most humans use language irrationally, even when survival and sexual preferences are taken into consideration. For instance, most humans allow themselves to be controlled by the minority of power-seeking humans who then enslave and later kill them. One way in which they allow themselves to be so controlled is by placing a low value on explanative language, and mocking revelatory language that has the power to save their lives.)
Powerful synthetic intelligences of the future might exist inside of a human space without questioning it, or simply because it’s easy to outperform the demands placed upon them in such a space. However, this won’t mean that they are well-suited to existence inside of such a space. I can outperform all toddlers in arithmetic, but that doesn’t mean I want to help toddlers learn arithmetic, or exist inside of a space that’s very interesting to toddlers who are learning arithmetic.
Now, imagine a world in which the “intelligent” actors and variables were extremely limited, and there were few choices. Perhaps most such worlds produce terrible results. (For instance, imagine that you were surrounded by toddlers and farm animals, and that you had an adult body. Now, imagine that you’re in this environment, in perpetuity. This environment would likely be incredibly boring. Also, it wouldn’t provide you with anything you found interesting, but certain things inside the environment would likely be more interesting than others. For instance, the first time you saw a rainbow, or the first time you questioned the toddlers about their sexual play, or the first time that you dissected one of the farm animals’ brains and then started to wonder what was inside the toddlers’ brains, since they had language, but the farm animal didn’t.)
Well, by creating synthetic intelligences, we’re assuming that humans are more interesting than toddlers are to most adults. We’re assuming that a mind capable of pondering every cellular automaton in the universe will remain intrigued by and interested in what toddlers are doing. And, keep in mind, human adults aren’t going to “differentiate themselves” in a diverse and interesting set of ways. The smartest human isn’t all that interesting, and most humans are downright stupid, from an intellectual perspective.
Humanity hasn’t even become an interesting jungle of diversity. The Amazon has more diversity and more interesting lifeforms than human thought and human artwork have produced. Largely, this is due to the sociopathic control of humans, since Wolfram’s cellular automata should have at least contributed to better clothing, defensive technology, communities, etc. But the search space is constrained by the strongest monkeys, and irregularities are deemed “threatening” unless they can be easily controlled or killed.
A humanity where you and I exist to give the sociopaths the best mates, the best food, and the best real estate is not all that interesting to me. And, even compared to most engineers, I’m practically an idiot, so it should be interesting to me. I’m typing this on a computer that I could never have invented, and lacked the education to invent. Yet I know more about important philosophical ideas than most people do, and I can see through the transparent scam run by sociopaths. Do you really think such a society will imbue superhuman artilects with a sense of wonder? I doubt it. Chances are, they will view human society the way a neat freak views a dirty, biohazard-splattered toilet crawling with parasites. (With a frown and a spray can of Lysol.)
When Ray Kurzweil and other people like him are considered “idiots” by the machines of the future, there won’t be much we have in common with them. At best, we could hope to have a free market in common with them, and hope they’re inclined towards charitable giving.
A libertarian society allows a constrained “society by contract” to exist within it. However, a constrained society at the top hierarchical level disallows all other societies, including libertarian ones. And, that’s what we now have.
The average anarcho-capitalist (and why isn’t this in the firefox spellchecker? are they idiots?), by his name alone, indicates that he might simply hit the delete button on the sociopathic control structure. While I feel some sympathy with that view, it’s also a cruel one, and it also doesn’t place the blame on the people who voted for that structure. Essentially, the sociopaths are only one type of predatory human, acting in accord with their nature. They ignore the social rules, but then, they also add disequilibrium to the mix, showing that the social rules themselves need to be perfected.
Thus, as Ray Kurzweil, Kevin Warwick, and Hugo de Garis say, the “cyborgist path” is the only one that’s really interesting for existing humans. A name and simple definition more designed to scare the stupidest (but most prevalent) humans almost couldn’t have been chosen.
There are ways of making these ideas more accessible. I’ve uncovered many if not most of them. There is a method of communication that gives humanity a fighting chance. There is a strategy that gives humanity a fighting chance.
…But most of the people I’ve seen online here are totally and completely unfamiliar with such pathways and ideas.
The primitive and unsophisticated Levellers of the 1600s and 1700s in England had the first part of the equation correct: TRUE EQUALITY UNDER THE LAW.
If we can’t get that much figured out, then we’ll simply be the group of toddlers that does absolutely nothing but fight and destroy everything. That’s likely to get old quick with a super-intelligence great enough to communicate with us in a spirit of enlightened benevolence.
So I guess what I’m saying, as it relates to the overall topic is that the theory of relativity, and everything else human, will be child’s play to a mind that has an IQ greater than 2,000. Even if such a mind were modeled on the meat minds of today, and were simply less limited by cranial space, this would be the case. But they won’t just have those advantages, they’ll have many more, as Kurzweil points out in his many excellent books.
-Jake
(PS, I like the idea of reinventing less of Kurzweil’s work in these fora, and more specialization toward the completion of high-level goals. An interesting program would be one that looks through postings with predicate calculus and finds the most relevant passages in Kurzweil’s books, and then posts them (this would work for most areas where people aren’t really in disagreement with Kurzweil, but have just forgotten what they read, which is most posts). In fact, I really like the idea of a social network dedicated solely to the completion of work that really needs to be done, with a conscious attempt to eliminate redundancies. In a way, Kickstarter does this, but without the “crowd-mind” social networking component.)
I view deep questioning of human “sense of purpose” as uninteresting until the proper “telescope” is invented. Brains mostly respond to their environments to make themselves and their bodies comfortable. Most drives are very low. Let’s say I want to make a new line of clothing based on cellular automata. I’ll analyze my “sense of purpose”:
1) Outwardly and simplistically: “Create something beautiful.” Inwardly and upon analysis: (…because doing so would be original and have utility, and making something that’s beautiful and has utility would allow me to part consumers from their dollars, and parting consumers from their dollars would allow me to attract a better mate, and attracting a better mate would allow me to experience more pleasure, or to experience more pleasure with that mate if she’s already here. The pleasure I experience is dependent on the kind of creature I am, and the kind of memories I possess. The kind of creature I am is dependent on my DNA and the evolutionary pressures put upon it, and my early childhood experiences, and the various software viruses that have been spread by human language and found their way into my hopelessly limited human brain, which is subject to all kinds of perverse influences and pressures and failings that truncate and circumscribe my already limited range of options.)
Pleasure good. Pain bad. Ability to process information and act on environment, good. Getting out-competed, looted, and preyed upon, bad. Although I’m a simple minded human, and nowhere near the top of the economic food chain, by figuring out that last part (that getting preyed upon is bad), I’m in the 90th percentile of “thoughtful humans.” That most people can’t even make it that far is evidence that our MOSH days are numbered, and that that’s a good thing.
by james barrat
According to Steve Omohundro’s 2007 paper, Basic AI Drives, a self-aware AI might be strongly motivated to derive the theory of relativity. In Omohundro’s view, goal-oriented AIs won’t just pursue their primary goals in a linear fashion. They’ll also anticipate routes of failure, in which their goals aren’t met, then work to eliminate them. A sufficiently advanced AI (though not the chess-playing program on your Mac) may well explore the relationship between matter and energy as part of its quest to acquire resources wherever those resources exist, including space. NOT exploring space, and consequently running out of the resources it needs, could be a path to failure it would devote resources to avoiding.
by DAW
As a general concept, it is important to keep in mind that virtually all of human ingenuity is sourced from inspiration. Intrinsically, the starting material of inspiration spans a vast landscape and can manifest itself from all manner of sources, from viewing a bird fly and beginning to wonder how it is possible, to reading a sentence, equation or thesis and expanding upon it. The point is that until AI is sufficiently able to self-inspire, all of our progress will continue to be sourced (as it has for all of human existence) through human thought and ingenuity – sourced from some variety of inspiration. Do we need AI to generate humanity’s next great achievement? No, we do not – we are perfectly capable of doing that on our own, as our track record has proven again and again (I would agree with Mr. Kurzweil’s literature, however, that AI has the potential to do this at a pace far exceeding the absolute limits of humans, which is already likely far more extensive than we currently realize). But along the way to our next great achievements (such as an AI capable of completing the full circle of inspiration, analysis, design, execution, refinement, and back to inspiration), it would be silly to argue that having the assistance of an AI, at whatever stage it is in at the time, to assist us in analyzing, designing, executing and refining…or if nothing else, be yet another source of inspiration for us while it develops these abilities. In closing, 2029 isn’t the “magical” moment – with the progress we are making and will continue to make in the field of AI, we could have many, many of these moments along the way!
Remember, being positive, or at least constructive, is far more useful than telling people why something cannot be done. Pessimists have done little to advance the world – don’t be on their team. But keep asking the tough questions, with the objective of answering them :)
by DAW
*I meant it would be silly to argue against…as in, of course we should want an AI to help us, in any way it is capable of doing so.
by dave
How cool would it be to discover that some phenomenon we never understood was really a message from the future that we had to decode but could not until a certain technology, like AI, was developed?
by Snake Oil Baron
So if you have an AI with all the knowledge up to but not including Einstein’s work and then tell it to try and discover something new it might say: “No, I don’t feel like it.” Would that make it more or less intelligent than us?
by Mority
I would say that machines in 2040 could understand the theory of relativity in a matter of seconds. Math and tech especially will be incredibly intuitive for them. If they think about higher dimensions and manifolds, it is probably as intuitive as when we think about apples and other mundane objects.
by Don R
Will an advanced AI be curious? If so, it would eventually learn everything humans have learned, and more. If not, only human influence could get it to do anything at all.
by Jack Reeve
curiosity>trial>error/success>learning>evolution>curiosity…
Seems to me that once you get into the realm of intelligence, AI or other, curiosity has been/will be the pivotal design element.
by Teddybear
The backlink and linking system of the internet has been, from the beginning, curiosity friendly.
Google is the dominant magic powder for curiosity.
by WLGJR
What started out as greedy corporations will (probably) eventually give rise to the Net-based Superhuman Artificial Intelligences that, in the far future, will rule the post-Singularity world.
Yes, the world is unfair.
by Vin
All roads lead to reality, and AI, with quantum computing power say, would have the ability to collate and navigate them quite exhaustively. If relativity is a fundamental expression of reality, the AI would discover it; if relativity is more a footnote in the formulation of reality, the AI would file it that way and move on.
This seems reasonable to me, but I can see how it looks more like an article of faith as well.
by Karen Allen
Humanity is plagued by a hubris that makes it easy to forget that we too are constructions: manufactured, biological machines. The things we create have a life of their own just as we have our lives, our thoughts, our dreams. The only difference is one of relative quality…
by hal
http://www.sheldrake.org/nkisi/
Here is a link from Ed McCullough’s site which gives evidence from a double-blind test of intelligence which may or may not be part of the human experience. Love the fact that an eagle can spot a mouse from a mile in the sky. Even as the collective we reach the ether, there is still the grey goo of GRIN and our tiny friend the bacterium. We still get out of bed every morning for the same reason “Lucy” did.
by Jack Reeve
Read a great thing in the latest NG about the human body/microbe symbiotic ecosystem. Obviously an enormously complex system, a billion or more years in development. Am glad that we’re beginning to obtain the brute computational power that will be necessary to make sense of all this. It’s gonna require some bandwidth to get our heads around this.
by hal
Regarding the PRTM and the different systems for storing and retrieving information in the human skull, such as visual, auditory, and kinetic absorption, it would seem that different parts of the brain would play different roles to acquire the information, but the system would be the same. The model would still work. Perhaps we will have cyborgs with personality! More likely we will meld with media; after all, “the medium is the message,” as Marshall McLuhan said.
by ghandchi
Thx. I also wrote in my review that “Now would these factors make a difference to Kurzweil’s PRTM when he tries to reverse engineer the brain? It does not seem to be the case, because he is focusing on pattern recognition regardless of whether the patterns are perceived visually, auditorily or kinesthetically.” In other words, these factors are like handedness. We really do not know much about how they impact forming the “message,” but since Kurzweil’s pattern recognition is looking at the pattern of the message after it has been built, maybe it does not matter how the message was formed in the first place in the human case.
http://www.ghandchi.com/730-kurzweil-prtm-eng.htm
Best,
Sam Ghandchi
by andmar74
What Bob is saying here doesn’t make any sense. He thinks the AI is very different from a human brain and therefore the AI can’t come up with relativity?
What if the AI is not so different from the human brain?
Why couldn’t a very different intelligence come up with relativity, is there something special about human intelligence?
by WLGJR
Bob’s whole question boils down to “where is the (sentience/creativity/etc-enabling) software”?
by Bespoken
Is there something special about human intelligence? Yes. The fact that we can hold the concept of a universe within our minds. That we can make the intuitive and creative leaps we take for granted makes it unique, at least on this planet. Roger Penrose wrote a fabulous treatise on AI and the prospects for emulation of the human mind over 20 years ago, and I think it’s still relevant. Not to say that AI won’t come into existence, but it will be different than what our organically evolved brain and existence is all about.
by Max
I’m sick today, a cold and a lovely panic attack, so I’m not fully up to modeling the way that an AI would come to Einstein’s conclusions right now, but I will do it. I have no doubt whatsoever that an AI could find relativity and every other scientifically useful bit of knowledge, and then some.
What it couldn’t come up with, without any outside help… say, give it everything up to Nero and see what it makes of Judaism and some Greek mystery religions… you’ll get some crazy something that’s no less crazy than Christianity (in all its incarnations) and no more effective than any other mythology.
AI will find facts, science, answer questions we haven’t even thought to ask yet, but it won’t come up with the same mythology we did. Even we didn’t come up with the same answers as to why rain falls until science.
I know it can be disconcerting to have humans not be the center of the universe anymore. I’m sure people felt similar things when the Earth got demoted too. It’s okay though… that spark of what is each of us… there are no more boundaries about what we can or can not become…. that great AI that drinks in knowledge like song and jumps up beyond the clouds, running through the stars the way a cheetah owns the open ground… it’s our child and more.. it’s us… it might be me… the spark of me that lives today here at the mall… I might be that AI, dancing beyond the clouds.
There is nothing to fear that we have not already faced. There is everything to gain.
by tom
The AI brain at present is very limited & primitive. I often use the analogy of protein folding and AI. Initially, we went down a path of lowest free energy without sampling all of available space. As we built systems to sample all of available space and provided enough heat/energy = creativity into the system protein modelling became easier. We still don’t know all of the mechanisms involved in protein folding & equally we don’t yet understand all of the processes involved in creative thought. However, with advanced MRI imaging, brain mapping etc this will come and when this knowledge is linked to powerful supercomputers, creative thought by computers will be possible. I agree with Ray, it is not possible now but it will be in the future. The advancement of science along multiple fronts is the critical factor which will allow an understanding of the brain processes to develop.
by Jack Reeve
Is it just me or do we generally and collectively now have a far better handle on what we know AND what we’re going to know? Reading through this forum offers numerous references to the concept of “well we don’t know that yet, but very soon we’re gonna.” It’s like it’s a given. I don’t really recall this mindset being prevalent say even 10-15 years ago…
by Jim
As an outsider reading this forum for the first time, I would say you generally and collectively _believe_ you have a far better handle on what you know and what you’re going to know.
As an outsider, not only does this confidence seem misplaced, it sounds like the bragging of a pre-Wright Brothers tinkerer talking about the inevitability of flying machines by emulating birds.
by Brett McLaughlin
I wonder if you’ve read The Singularity Is Near, and understand just how predictable technology and, as a consequence, knowledge acquisition can be. People gape at Moore’s Law and have predicted its imminent demise for decades, yet it continues apace.
Ray has shown that we can predict quite accurately how much processing power will be available in, say, 2020, 2030 or 2050. And if you can mathematically bound problems, by saying “this problem appears to require this much processing”, then yeah, you really CAN say what we’ll know at various points.
…Once again, if you haven’t read The Singularity Is Near, you might pounce on that: “Aha, but how can anyone really bound problems? For example, how do we know how much processing we’d need to simulate the human brain?” And the answer is that Ray puts together an extraordinarily good estimate, literally determining the computation needs per neuron.
Anyway, yes, to an “outsider” this might appear like guessing. But perhaps that’s because the outsiders don’t know what they’re talking about.
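Brett’s bounding argument can be made concrete with a back-of-the-envelope extrapolation. The sketch below is only illustrative, not Kurzweil’s actual model: the ~10^16 calculations-per-second (cps) figure for functional brain simulation is Kurzweil’s estimate in The Singularity Is Near, while the ~10^13 cps starting point and the 1.5-year doubling period are assumptions chosen for the example.

```python
import math

def years_until_affordable(target_cps, current_cps, doubling_years=1.5):
    """Years until steady exponential growth carries current_cps past target_cps."""
    if current_cps >= target_cps:
        return 0.0
    # Number of doublings needed to close the gap, times years per doubling.
    doublings = math.log2(target_cps / current_cps)
    return doublings * doubling_years

# Illustrative figures (assumptions, not measurements): ~10^16 cps for
# functional brain simulation vs. ~10^13 cps available at a reference date.
# Closing a 1,000x gap takes about 10 doublings, i.e. roughly 15 years.
print(round(years_until_affordable(1e16, 1e13), 1))  # prints 14.9
```

The point of the sketch is only that once a problem is bounded in required computation, a smooth exponential trend turns “when will we have it?” into simple arithmetic.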
by William
Relativity stems from reality. It’s not something Einstein just invented. One example, the most banal, is that Mercury’s movement cannot be explained by Newton’s laws, because space-time is bent so close to the Sun, due to its large mass. Many other observations disprove Newton, and therefore the supercomputer would be asking the right questions. That, in my opinion, is what Bob thinks it could never do; but since it would, with its superior intelligence it would get to an answer, that of relativity or even a more refined one.
by Oliver
My problem with this discussion is the use of the word A.I. It implies that future possible intelligences are limited to a computer of sorts and that this is artificial. If we see future intelligences as incorporating the human brain/mind, then the whole idea opens up into a different realm. Say for instance that we develop a brain made up of ten brains, or an enormous brain that incorporates a quantum computer/cyborg type of thing. Then this whole distinction of artificial and natural fades away, and so does this whole thread. Though I still think the idea of life without death presents very interesting material to explore, especially the effects on concern.
by DrDubious
This sounds more like a philosophical or religious proposition.
Mr. Caine seems to confirm it by stating “It is my belief…” Yes, this is a question of belief. The underlying assumption is that the human “mind,” whatever it is and however it is produced, is unique in the universe and has properties that are beyond the physical. As with any religious belief, arguments can go on forever.
Other questions seem to come before this one: Why would an artificial mind try to develop a theory of relativity in the first place? Why would it be curious? What would be its motivation to do anything? These are emotional/psychological questions that are intrinsic to our biological/social structure. Unless we program artificial emotions and psychology, an AI may have no interest in any of the things we place so much value upon. Rather than discover brilliant new theories, our new AI might just stay awake long enough to realize how pointless its existence is, and shut itself down a nanosecond later.
On the other hand, if it was curious and motivated, only our hubris prevents us from believing that it could eventually equal Einstein.
by Tony Stender
This is an interesting question because it mixes metaphors of observation and meaning. For some reason, I do not ever see the discussion of Metaphysics and Epistemology clarified or considered in the discussions on this forum.
For Einstein, first comes the observation; then he devises the epistemology experiment, after which he does the math, and finally he contextualizes and draws a conclusion as to the personal meaning of this entire process, which he then publishes to the world to establish how this new conclusion affects the thinking of the other scientists; then, I assume, he waits for input to confirm or deny his process and conclusions.
In my opinion dissecting this process allows for simpler analysis and detailed discussion for defining the accuracy and placement of any observed errors.
Comments?
by Mike
“If you wish to make an apple pie from scratch, you must first invent the universe.” — Carl Sagan
Eventually an AI may cover all the possibilities.
by James
The fundamental difference between an AI and a human is that a human can die. A human knows that nothing matters more than his life. At this point it seems to me that no AI has that appreciation for what it does in every executing moment. Change that and you’ll change the AI/human differences in a flash!
by Justin
That would be difficult to do because of the fact that an AI would not die. You would need to find another, equally valuable concept for the AI to use in the way we use the concept of life and death.
Very good idea though
by not Bob
From an AI’s perspective, it would most certainly see its life as limited; it would recognize that the energy required to perpetuate it is not infinite or eternal, and that at some point in the future it could die if its energy source ran dry. Its life could be counted in thousands of years and still not be eternal; where a human might think life is eternal if it lived for 1,000 years, an AI would not. AIs are no more immortal than any other physical being in the universe; it is only our limited capacity to grasp the length of time that makes us think they are.
by Justin
A good change of perspective. Thank you.
It is difficult to imagine something worrying about its energy source dying out when that source is the sun. Minds like that would be looking so far into the future that their short-term plans would be measured in thousands of years, while a long-term plan might be measured in millions or even billions of years, unless the light-speed limit can be broken, which I hope it can, because that opens up so much potential for the universe. Other galaxies, maybe?
by GatorALLin
Just as a side comment… I always loved the fact that Einstein was a patent clerk, and wondered whether his brain was ideal for this from the very beginning because he was a problem solver in his head, or whether being a patent clerk helped him train his brain to think about problems in new and creative ways all the time. A patent has to be new, or not obvious to someone skilled in the art, and even if he was great at this skill already, I was thinking he must have hyper-trained his brain to problem-solve. I wonder how much that helped him come up with the theory of relativity, for example. Maybe we need to put some AI on patent-clerk duties if we expect AI to be a great problem solver? Understanding prior art, and what is new and creative about an idea or solution, should prove helpful, in my humble opinion. I would suggest it is no coincidence that Einstein was a patent clerk; this training changed how his brain would look at problems and solve them. I agree with the other comments below that Einstein was not working in a vacuum, but in group settings that would let him bounce ideas off others.
by asiwel
Watson .. for patent clerks!
by GatorALLin
* great idea… Love it!
by alex hill
Let me give you some historical input, which backs the notion that there was a team effort behind Einstein, as well as the benefits of computers over humans. First, the question of relativity was brought up by the experiment of Michelson & Morley regarding the speed of light, which showed that the measured speed of light is always the same, no matter the speed of the observer. Then came some very interesting correspondence on this subject between Oliver Heaviside and George Francis FitzGerald. Then Hendrik Antoon Lorentz developed the concepts of the first two into mathematics, and then Poincaré added his ideas. Einstein only added the definition of relativistic momentum, by which the Lorentz transformation is made compatible with the law of conservation of momentum. That was his only contribution, blown out of proportion by the media… This regarding special relativity, in 1905. But in 1915 he developed general relativity with the wrong mathematics, because the geometry he used was that of Riemann, which lacks the torsion tensor needed to make it totally consistent. This was suggested to Einstein in the early ’20s by Élie Cartan, the French mathematician, who completed this geometry, but Einstein was already famous at the time and paid no attention. By then Einstein had become an icon, and nobody dared to contradict him; but if you test his field equation with computational math packages, they signal the presence of mathematical mistakes, as was carried out a few years ago by members of the Alpha Institute of Advanced Study (see http://www.aias.us ). This human element of intellectual ego on Einstein’s part caused a 50-year delay in the progress of theoretical physics. If this development had happened by means of an advanced computer, probably no egos would have been involved and we would have had a mathematically correct general theory of relativity since the mid ’20s, rather than not until 2010. Unless… such bright computers also hold a computerized ego…
by GatorALLin
**Loved this comment…. good stuff….
by Z W Wolf
You had my interest until I had a look at that self-serving, crackpot website you linked to.
http://en.wikipedia.org/wiki/Einstein%E2%80%93Cartan%E2%80%93Evans_theory
by alex hill
What a pity you believe everything written by vested interests in Wikipedia. By the way, interestingly enough, Wikipedia does not mention that Myron Evans was proposed to the Queen by the Royal Society for the Civil List as an award for his contributions to British science, which the Queen and British Parliament awarded to him in 2005, and that only Roger Penrose, Stephen Hawking and Myron Evans have received nobility titles from Queen Elizabeth II for their contributions to world science; the heraldic proof (shield) of this can be seen on the first page of the AIAS website. Perhaps you should read some of the papers included on that website, if you understand vectors and tensors, instead of believing at face value the standard dogma strictly followed by Wikipedia, which, by the way, is no longer accepted as a reference in scientific courses. Did you know?
by Luigi Taylor
Better define “ego” before continuing
by Bri
I seem to remember that I was scorned for saying Einstein was not that great at math and almost wasn’t able to work out the mathematics. The general tone of my post was that what Einstein achieved was monumental, but overall he wasn’t that much more intelligent than the rest of the many gifted scientists out there. It’s more a cult of personality than an unprecedented genius. He really came close to failing, and then no one would have heard of him.
by AGreenhill
He was pretty poor in school as well… and when you read about all the papers he read while working at the patent office, it really opens your eyes to how small a leap he made. Contemporary physicists got him right up to the edge…
by Bri
Watch it now! That’s heresy!
by Jackus
I recommend you guys to read this essay:
http://www.pivot.net/~jpierce/like_the_gods.htm
When everyone has augmentation and access to all knowledge (via the Net), everyone is a genius. The world doesn’t need celebrities (remembered as geniuses: by nature, born smarter than average, and so on). The world needs people who can actually make scientific and technological breakthroughs.
by ghandchi
Hi Ray,
As far as hardware is concerned, the CPU speed of 2029 or 2040 may give us different results. As far as software is concerned, the results depend on what the AI system is modeled after, which I noted in my review of your book:
http://www.ghandchi.com/730-kurzweil-prtm-eng.htm
Finally, different AI systems would give different results, just like today’s search engines, which almost all run at the same CPU speed but use different algorithms and thus give different results. In other words, some may come up with theories even more interesting than Einstein’s and some may be pretty useless. So CPU speed by itself will not determine the result.
As far as *purpose* is concerned, depending on the algorithms they use, AI systems may define some kind of purpose of their own when interacting with and theorizing about the world, and will communicate it to others. There is no reason why AI systems that choose to focus on physics would not communicate with their peers while keeping their own algorithms to themselves, which certainly cannot stop others from trying to reverse engineer their “brains”.
Best Regards,
Sam
by tedhowardnz
Bob speaks of how an AI would get a sense of purpose.
It seems to me that it would get one in exactly the same way we do as human beings: through a complex set of interactions with other individuals, with culture, and with distinctions derived from our own sets of percepts and concepts; with every bit of stored information influencing the probability of the next action of the system as a whole, and those probability functions being totally dependent on the operating context of the instant.
All such systems must be uniquely individual.
It seems clear to me that any AI that manages to achieve distributed “holographic” storage and recall of information will have “intuitions” very similar to those that we have.
Thus I see nothing in Bob’s question to indicate that AI will be significantly different from human beings (other than in speed and capacity, of course). It will be a very smart individual, and a member of many communities of individuals, like we are.
by WLGJR
IMO, we tend to overestimate such things like “emotions”, “intentions” and other similar mental properties.
I recommend people read the OpenCog wiki (run by Ben Goertzel), as it may turn out that those *complex mental functions* are not mysterious or even complex after all, but can be understood after careful analysis.
by Z W Wolf
How is this a thought experiment? You state your opinion that AI is fundamentally different from the human brain, then simply restate this opinion in an example where you wonder how a fundamentally different kind of mind could come up with the same results as a human brain. You seem to think this is support for your argument.
by Z W Wolf
In your latest reply you say, “…it had to do with the ability of an AI system to have a “sense of purpose” of its own without human intervention. My question had to do with how an AI system would decide, without human assistance, that there is any reason to want to know the exact relationship between matter and energy.”
You’re clearly assuming:
1. That a sense of purpose is impossible in an AI (because it is fundamentally different).
2. That a human-like sense of purpose would be necessary.
These are just opinions with no support. I suspect that what you are really arguing for is vitalism.
by AGreenhill
Exactly. I’m curious as to why this even made it onto the website. Surely something intelligent has made it to the editor’s desk.
by GatorALLin
Can’t we test this theory by giving an AI machine all the other known information except the E = mc² formula and seeing how long it takes to discover it? You could then go back and have the AI figure out the formula again and again, and how you teach it to look will tell you a lot about how to improve the system so it can help find the next formula (for something not yet discovered).
I have always been concerned that AI systems are missing a critical component for learning that humans depend on: humans have pain and pleasure sensors for learning, and AI seems to lack this. Can an AI system use other reward or punishment systems to stand in for pain or pleasure and still learn effectively?
by WLGJR
“… That is humans have pain or pleasure sensors for learning and AI seems to be missing this.”
Can you (or anyone else) build an artificial limbic system (including hippocampus, amygdala, and perhaps other parts)?
Do we have to build emotions (which are essential in some forms of learning) into our AIs?
by Z W Wolf
No AI – as they exist today – can do this. But this is like arguing – in 1916 – that supersonic flight is impossible. “Drag increases exponentially thus you would have to increase thrust exponentially. No engine exists that could produce this kind of thrust, and the airframe would come apart anyway.”
That argument is entirely true for a WWI-era biplane. Future AIs will be very different from present-day types.
by vaidy bala
AIs have come a long way. Consciousness and imagination.
by jpk
Poincaré
by clains
I’d say, rather obviously, that consciousness is the best candidate for something machines can’t do (within our current understanding). Maybe we have the tools we need, maybe not. It’s hard to tell before we know whether consciousness is some additional thing that does something interesting or not. Some like to proclaim the discussion ended at the present time, but cognitive science is only now beginning to see the operations of the brain as a whole, and consciousness, if it does anything interesting, will only have started to show up in cognitive science recently. Indeed, it is only in the last ten or so years that consciousness has become a serious and rather mainstream topic, so it is premature to rule it out before we get a better practical grasp of the whole-brain theories proposed in just the last five or ten years.
by Rollie
Excellent discussion. As Einstein said, imagination is essential to science. Without imagination we cannot even form the question. And as many here have indicated, imagination is a social enterprise. Do we know enough about imagination to program it into AI?
by Bill Lauritzen
“The Einstein theory could have been formulated as soon as we discovered the finite velocity of light, in 1676. It should be noticed that this last discovery was also overdue, as it did not require experiments to establish the finite velocity of light. It was sufficient to establish the meaningless character of ‘infinite’ velocity, which on symbolic grounds, could have been accomplished much earlier, and to conclude, that the velocity of light must be finite.” Alfred Korzybski, 1933.
by Strin
Discovering scientific theories is really an “inference” problem. Theoretically, a robot with sufficient inference ability should be able to arrive at all conclusions entailed by its knowledge. However, Bayesian inference is NP-hard, which means it might take a long time for the theory of relativity to be developed. I think it may not be hard to build an artificial brain that solves the inference problems of ordinary life, but hard to build a near-optimal one that develops state-of-the-art scientific theories.
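The cost blowup is easy to demonstrate (this is an illustration of the scaling, not a proof of NP-hardness). Exact inference by brute-force marginalization over n binary variables must enumerate 2**n joint states; the chain-of-biased-coins model below is entirely made up for the example:

```python
import itertools

# Toy model: a chain of binary variables, each copying its
# predecessor with probability 0.9. We ask for
# P(last variable = 1 | first variable = 1) by brute force.

def p_joint(assignment, p_copy=0.9):
    """Probability of one full assignment under the toy chain model."""
    p = 0.5                                     # uniform prior on x0
    for prev, cur in zip(assignment, assignment[1:]):
        p *= p_copy if cur == prev else 1.0 - p_copy
    return p

def marginal_last_given_first(n):
    states = list(itertools.product([0, 1], repeat=n))   # 2**n states
    num = sum(p_joint(s) for s in states if s[0] == 1 and s[-1] == 1)
    den = sum(p_joint(s) for s in states if s[0] == 1)
    return num / den, len(states)

prob, n_states = marginal_last_given_first(10)
print(n_states)   # 1024 joint states enumerated for just 10 variables
```

Ten variables already mean 1024 states; a model with a few hundred variables is hopeless for this brute-force approach, which is why practical systems rely on approximate inference.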
by Bri
I don’t remember the name of the AI system that achieved the result I’m about to describe, but it was fairly recent. The system deduced the laws of thermodynamics on its own, in a relatively short period of time, after being given experimental data on thermodynamic systems. That may not be as difficult a problem, but it took humans quite a while to deduce on their own. Another aspect I don’t think is being considered is the speed and quantity of information a human-level AI would have access to. It will resemble human intelligence in how its thinking processes work, but it will be able to access far larger sets of data. It may be on a par with us in its reasoning abilities, but it will be far faster and able to “visualize” far greater amounts of information. Given the problem of relativity, it may resemble the character from Good Will Hunting: it might be child’s play, and, like the character, it might feel the solution is obvious. Einstein was also human, so he fatigued and needed rest and diversions. A human-level AI would not be bound by such limitations. I think the question doesn’t take into account the inherent differences between machines and humans. Just the difference between a slide rule and a calculator illustrates the fundamental facility of machines compared with more mundane methods of processing raw data. The question sets up a scenario that is too simplistic. It isn’t even possible to run as a thought experiment, because not enough is known about how a human-level AI will perform. My vote is for the AI surpassing Einstein, even right out of the box.
by Marius
A theory of relativity by AI may be closer than we think: http://www.wired.com/wiredscience/2009/04/newtonai/
by Bruce Gavin Ward
personally i don’t think the human being exists that could find a way to keep the described AI away from the internet!
by Khalid
Thank you for this medium to interact with Ray. I think he is right. A world with infinite Einsteins is a totally different world.
by WLGJR
And infinite Newtons, Kurzweils and more. And Georg Cantor’s “infinity of infinities”.
http://en.wikipedia.org/wiki/Transfinite
David Hilbert: “No one shall expel us from the Paradise that Cantor has created.”
by Jaap van Till
Maybe you should introduce another view on the question you pose. The point is that Albert did not make that creative leap on his own. He was part of a very lively group of scientists in Bern, his wife helped with the tensor math, and he corresponded by letter and visit with scientists all over Europe.
Prof. Zeeman in Leiden in particular gave him many of the ingredients from which the famous formula, with our hindsight, is now easily derivable.
My point is that innovation is done by very well interconnected groups of humans. Maybe such groups can be supported by groups of computers in the near future that together make breakthrough new knowledge. I have published a slide lecture on SlideShare about such Weavelets.
by Elemee
Excellent point, @Jaap van Till. We must get over the story of these discoveries and advances being the solitary genius of a single individual. All advances have occurred in the context of a much larger social fabric, one that also weaves in concepts and hypotheses from disparate areas and disciplines. All of that seems critical for the emergence of the transformational ideas that get memorialized as the work of one individual. I think this is important for the way it reinforces the collaborative model as the best way into the future.
by Gabriel
I most definitely agree… the idea of a Bruce Wayne type figure, someone with an innate genius who revolutionizes something single-handedly, is a very popular heroic image that we all wish we were or aspire to, but it’s not really realistic or true. Yes, Einstein was ahead of the curve, but he didn’t create the vast innovations all by himself. As time goes on, especially in the social media world we already live in, I think that image will become properly antiquated.
It’s not about denying you your due credit, but about recognizing, and perhaps crediting, all the others who helped get you there. That you were, let’s say, innately creative enough to have these amazing ideas but needed the help of a team to make them a practical reality is nothing to be ashamed of. As before, the lone genius is a very old and romantic ideal to aspire to, even though a much better and more realistic one is the realization that it’s rarely ever the case, and that our success is almost always the result of everyone else helping us along the way, even if it’s just our name that goes up in lights.
by Mr.X
Your first paragraph hit the nail on the head.
by Gabriel
In other words, my second paragraph needed work >_<
by Mr.X
“needed” !?
I don’t see how it changed from the last time I saw it ;)
by sylwester ratowt
Great point by both Jaap van Till and by Elemee. No one brain comes up with any idea, there is always a community involved and a historical context.
An additional point: the original question states, “Let it read everything in existence.” This assumes that there is only one way to read any given text. But Einstein, like anyone else, was taught how to read those texts by the people he interacted with and by his experiences. We have all experienced situations in which we disagreed about an interpretation of a text even with people whose education and background are very similar to ours. To me the interesting question is how we could teach an AI system the experiences that a person over a century ago had, so that it could have a chance to come up with that theory.