Ask Ray | How to Create a Mind thought experiment
February 10, 2013
I just finished reading How to Create a Mind and found it both interesting and informative. By the end, however, I had come to believe that there is an inherent difference between a human brain and an AI system, a difference that cannot be overcome by any amount of added speed and capacity. To illustrate this difference, I offer a thought experiment:
Take the most powerful artificial brain in existence. Include all programs necessary to make it function as an independent, self-conscious entity. Let it read everything in existence up to, but not beyond, the birth of Albert Einstein.
With no further human intervention of any kind, how long do you think it will take this artificial brain to develop the theory of relativity?
Feel free to assume the artificial intelligence capability you think will exist in 2029; but, again, limit the knowledge input to that which was available to Einstein.
It is my belief that the actual human brain is sufficiently different from an artificial intelligence system that without any human intervention this theory would never be forthcoming. If you believe otherwise, I would be interested in seeing the process modeled.
Again, since the artificial intelligence system is a self-conscious entity, presumably capable of self-direction, I would expect no human intervention whatsoever in this process.
— Bob Caine
Interesting point, but keep in mind that all biological human brains at the time (except for Einstein’s) did not come up with relativity either.
Einstein’s brain was ahead of the curve, but nonbiological intelligence will continue to improve both in hardware and software (algorithmically) past 2029.
So perhaps it is the AI of 2035 or 2040 that would be able to come up with relativity in your thoughtful thought experiment.
— Ray Kurzweil
My point in using Einstein’s theory of relativity in my thought experiment on AI equivalence to the human brain was not about whether Einstein had the support of others or how exceptional his mind was.
Rather, it had to do with the ability of an AI system to have a “sense of purpose” of its own without human intervention. My question had to do with how an AI system would decide, without human assistance, that there is any reason to want to know the exact relationship between matter and energy; the relationship between the speed of light and the relative motion of those observing that light; or, for that matter, the relationship between the cosmic microwave background and the Big Bang.
Given the task, I can readily see the role an AI system could play in deriving a solution. But how would it decide on its own that studies such as these should even be undertaken and then design, execute, and assess the related research to arrive at a verifiable theory?
— Bob Caine