<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Ask Ray &#124; How to Create a Mind thought experiment</title>
	<atom:link href="http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/feed" rel="self" type="application/rss+xml" />
	<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment</link>
	<description>Accelerating Intelligence</description>
	<lastBuildDate>Wed, 12 Jul 2017 03:16:20 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.4.1</generator>
	<item>
		<title>By: landis</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-253335</link>
		<dc:creator>landis</dc:creator>
		<pubDate>Fri, 08 May 2015 16:35:26 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-253335</guid>
		<description>How is &#039;necessary&#039; programmed into a computer, while still maintaining some guise of its self-determination?</description>
		<content:encoded><![CDATA[<p>How is &#8216;necessary&#8217; programmed into a computer, while still maintaining some guise of its self-determination?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: brenarda</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-250905</link>
		<dc:creator>brenarda</dc:creator>
		<pubDate>Fri, 09 Jan 2015 14:02:43 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-250905</guid>
		<description>If you upload the consciousness at this primitive stage, you get less than if you upload it with a processor chip or a computer chip; and if you put the chip inside humans, they might be more creative than AI.</description>
		<content:encoded><![CDATA[<p>If you upload the consciousness at this primitive stage, you get less than if you upload it with a processor chip or a computer chip; and if you put the chip inside humans, they might be more creative than AI.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Craig Knaak</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-143287</link>
		<dc:creator>Craig Knaak</dc:creator>
		<pubDate>Fri, 26 Apr 2013 16:59:24 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-143287</guid>
		<description>All I can say is: http://www.wired.com/wiredscience/2009/04/newtonai/</description>
		<content:encoded><![CDATA[<p>All I can say is: <a href="http://www.wired.com/wiredscience/2009/04/newtonai/" rel="nofollow">http://www.wired.com/wiredscience/2009/04/newtonai/</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: knpstr</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-142015</link>
		<dc:creator>knpstr</dc:creator>
		<pubDate>Tue, 23 Apr 2013 12:16:34 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-142015</guid>
		<description>I think the AI decides to figure this out when it decides that, in the future, it is necessary to leave this planet. If the AI makes the determination that the Earth will one day be &quot;outgrown,&quot; it will logically turn to space; in doing so, it will have the drive to learn everything about space so as to make any trips and exploration accurate and safe. Essentially the same way humans decided space was important. But who is to say &quot;when&quot; the AI would figure this out.</description>
		<content:encoded><![CDATA[<p>I think the AI decides to figure this out when it decides that, in the future, it is necessary to leave this planet. If the AI makes the determination that the Earth will one day be &#8220;outgrown,&#8221; it will logically turn to space; in doing so, it will have the drive to learn everything about space so as to make any trips and exploration accurate and safe. Essentially the same way humans decided space was important. But who is to say &#8220;when&#8221; the AI would figure this out.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Eric Horwitz</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-141348</link>
		<dc:creator>Eric Horwitz</dc:creator>
		<pubDate>Sun, 21 Apr 2013 00:46:28 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-141348</guid>
		<description>IBM has already created cognitive computing.

Google: &quot;Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE)&quot;</description>
		<content:encoded><![CDATA[<p>IBM has already created cognitive computing.</p>
<p>Google: &#8220;Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE)&#8221;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Mark</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-139828</link>
		<dc:creator>Mark</dc:creator>
		<pubDate>Wed, 17 Apr 2013 05:46:30 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-139828</guid>
		<description>Agreed.  It&#039;s an interesting topic, but the thought experiment accomplishes nothing, in my opinion.  We&#039;ll see that not only will AI discover and describe relativity, it will describe it with more accurate formulas, tensor calculus, etc.  It will also simulate it in such a comprehensible way that even the layperson will understand it.  The only necessary human intervention, if one wanted to speed up the process, would be to create a desire for an AI system to discover relativity.</description>
		<content:encoded><![CDATA[<p>Agreed.  It&#8217;s an interesting topic, but the thought experiment accomplishes nothing, in my opinion.  We&#8217;ll see that not only will AI discover and describe relativity, it will describe it with more accurate formulas, tensor calculus, etc.  It will also simulate it in such a comprehensible way that even the layperson will understand it.  The only necessary human intervention, if one wanted to speed up the process, would be to create a desire for an AI system to discover relativity.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jb</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-139728</link>
		<dc:creator>Jb</dc:creator>
		<pubDate>Tue, 16 Apr 2013 22:46:01 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-139728</guid>
		<description>Emotional equivalence won&#039;t strictly be necessary, but I suspect that it will be required before we generally accept the artilects as conscious.

My suspicion anyway is that the first artilect will be an emergent behaviour from a learning machine and part of this emergence will be a series of learnt and hard wired emotional responses.

The difference between this first artilect and a human will be initially the unique ability to replicate the new intelligence elsewhere or even to rewind the intelligence to some arbitrary checkpoint.</description>
		<content:encoded><![CDATA[<p>Emotional equivalence won&#8217;t strictly be necessary, but I suspect that it will be required before we generally accept the artilects as conscious.</p>
<p>My suspicion anyway is that the first artilect will be an emergent behaviour from a learning machine and part of this emergence will be a series of learnt and hard wired emotional responses.</p>
<p>The difference between this first artilect and a human will be initially the unique ability to replicate the new intelligence elsewhere or even to rewind the intelligence to some arbitrary checkpoint.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jake_Witmer</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-138969</link>
		<dc:creator>Jake_Witmer</dc:creator>
		<pubDate>Sat, 13 Apr 2013 22:08:09 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-138969</guid>
		<description>Like the non-aggression principle in libertarianism (&quot;NAP&quot; or &quot;ZAP&quot; for &quot;zero aggression principle&quot;), there is no reason to &quot;prove&quot; that an AGI will have a sense of purpose, because &quot;senses of purpose&quot; are largely contextual.  Also, &quot;senses of purpose&quot; need not be human, or emotion driven, or even compatible with emotion.  Most people would have a hard time even defining &quot;sense of purpose,&quot; without referring to specific portions of the human brain, human emotion, and situational context.  (And what about when all market needs are met?  I&#039;d probably still just wish I was smarter, so I could accomplish something that actually needs to get done.)

Also, &quot;senses of purpose&quot; are dependent on having the portions of the brain (giant modular neural networks) that motivate something (in this case, humans) toward action.  These portions of the brain are contextual on human lifespan (an artilect may see that it will live for thousands of years, and decide to embark on a &quot;longnow&quot; type of project that totally doesn&#039;t concern humans; the &quot;search space&quot; of potential &quot;problems&quot; and &quot;goals&quot; is immense without 50 million years of recursively-applied evolutionary filter).  Other filters applied to and constraining human goal structures are:

(1) Existence around humans that have more or less &quot;similar minds&quot;

(2) Existence around humans that have more or less similar bodies

(3) Existence around humans who can provide a means of one feeding and clothing oneself

(4) Existence around human market institutions which provide not just the essentials of life (common to all humans) but the ability to voluntarily choose individual &quot;subgoals&quot; based on one&#039;s own relatively unique experience

(5) Human existence around a set of mathematical theories that has led to the creation of a certain kind of useful mathematics that is nonetheless a small subset of the mathematical space (as Stephen Wolfram talks about in his lectures on NKS and in his book &quot;A New Kind of Science&quot;)

(6) The expectation that humans will interact with other humans, and their initial development starting them off in constant interaction with at least one other human (the mother). Also, even the asocial-tendency humans are likely far more social than a strain of machine that never was a genetic product of a long series of genetic results of mothers who did not live to reproduce unless they were successfully tied to the mother for mother&#039;s milk and nurtured by that mother.  (NOTE: Humanity STILL produced a population that was 4% sociopathic and over 75% regularly conformist-to-any-system-no-matter-how-bad or &quot;directed into choosing sociopathic choices&quot;!!!!  Imagine if 50 Million years of evolution didn&#039;t create the mirror neurons!  The default motivations are likely &quot;uncaring&quot; or &quot;sociopathic.&quot;)

(7) Existence around human language. (This is one normalizing factor for machines, assuming that they learn it and address significant resources to understanding it, in the eventuality that their lives are not dependent on understanding it, as our lives are. Still, imagine when they realize that most humans use language irrationally, even when survival and sexual preferences are taken into consideration.  For instance, most humans allow themselves to be controlled by the minority of power-seeking humans who then enslave and later kill them.  One way in which they allow themselves to be so controlled is by placing a low value on explanative language, and mocking revelatory language that has the power to save their lives.)

Powerful synthetic intelligences of the future might exist inside of a human space without questioning it, or simply because it&#039;s easy to outperform the demands placed upon them in such a space.  However, this won&#039;t mean that they are well-suited to existence inside of such a space.  I can outperform all toddlers in arithmetic, but that doesn&#039;t mean I want to help toddlers learn arithmetic, or exist inside of a space that&#039;s very interesting to toddlers who are learning arithmetic.

Now, imagine a world in which the &quot;intelligent&quot; actors and variables were extremely limited, and there were few choices.  Perhaps most such worlds produce terrible results.  (For instance, imagine that you were surrounded by toddlers and farm animals, and that you had an adult body.  Now, imagine that you&#039;re in this environment, in perpetuity.  This environment would likely be incredibly boring.  Also, it wouldn&#039;t provide you with anything you found interesting, but certain things inside the environment would likely be more interesting than others.  For instance, the first time you saw a rainbow, or the first time you questioned the toddlers about their sexual play, or the first time that you dissected one of the farm animals&#039; brains and then started to wonder what was inside the toddlers&#039; brains, since they had language, but the farm animal didn&#039;t.)

Well, by creating synthetic intelligences, we&#039;re assuming that humans are more interesting than toddlers are to most adults.  We&#039;re assuming that a mind capable of pondering every cellular automaton in the universe will remain intrigued and interested in what toddlers are doing.  And, keep in mind, human adults aren&#039;t going to interestingly &quot;differentiate themselves&quot; in a diverse and interesting set of ways.  The smartest human isn&#039;t all that interesting, and most humans are downright stupid, from an intellectual perspective.

Humanity hasn&#039;t even become an interesting jungle of diversity.  The Amazon has more diversity and more interesting lifeforms than human thought and human artwork have produced.  Largely, this is due to the sociopathic control of humans, since Wolfram&#039;s cellular automata should have at least contributed to better clothing, defensive technology, communities, etc.  But the search space is constrained by the strongest monkeys, and irregularities are deemed &quot;threatening&quot; unless they can be easily controlled or killed.

A humanity where you and I exist to give the sociopaths the best mates, and the best food, and the best real-estate is not all that interesting to me.  And, even compared to most engineers, I&#039;m practically an idiot, so this should be interesting to me.  I&#039;m typing this on a computer that I could have never invented, and lacked the education to invent.  Yet, I know more about important philosophical ideas than most people do, and I can see through the transparent scam run by sociopaths.  Do you really think it will imbue superhuman artilects with a sense of wonder?  I doubt it.  Chances are, they will view human society the way a neat-freak views a dirty bio-hazard splattered toilet crawling with parasites.  (With a frown and a spray-can of Lysol.)

When Ray Kurzweil and other people like him are considered &quot;idiots&quot; by the machines of the future, there won&#039;t be much we have in common with them.  At best, we could hope to have a free market in common with them, and hope they&#039;re inclined towards charitable giving.

A libertarian society allows a constrained &quot;society by contract&quot; to exist within it.  However, a constrained society at the top hierarchical level disallows all other societies, including libertarian ones.  And, that&#039;s what we now have. 

The average anarcho-capitalist (and why isn&#039;t this in the Firefox spellchecker? are they idiots?), by his name alone, indicates that he might simply hit the delete button on the sociopathic control structure.  While I feel some sympathy with that view, it&#039;s also a cruel one, and it also doesn&#039;t place the blame on the people who voted for that structure.  Essentially, the sociopaths are only one type of predatory human, acting in accord with their nature.  They ignore the social rules, but then, they also add disequilibrium to the mix, showing that the social rules themselves need to be perfected.

Thus, as Ray Kurzweil, Kevin Warwick, and Hugo de Garis say, the &quot;cyborgist path&quot; is the only one that&#039;s really interesting for existing humans.  A name and simple definition more designed to scare the stupidest (but most prevalent) humans almost couldn&#039;t have been chosen.

There are ways of making these ideas more accessible. I&#039;ve uncovered many if not most of them.  There is a method of communication that gives humanity a fighting chance.  There is a strategy that gives humanity a fighting chance.

...But most of the people I&#039;ve seen online here are totally and completely unfamiliar with such pathways and ideas.

The primitive and unsophisticated Levellers of the 1600s and 1700s in England had the first part of the equation correct: TRUE EQUALITY UNDER THE LAW.

If we can&#039;t get that much figured out, then we&#039;ll simply be the group of toddlers that does absolutely nothing but fight and destroy everything.  That&#039;s likely to get old quick with a super-intelligence great enough to communicate with us in a spirit of enlightened benevolence.

So I guess what I&#039;m saying, as it relates to the overall topic is that the theory of relativity, and everything else human, will be child&#039;s play to a mind that has an IQ greater than 2,000.  Even if such a mind were modeled on the meat minds of today, and were simply less limited by cranial space, this would be the case.  But they won&#039;t just have those advantages, they&#039;ll have many more, as Kurzweil points out in his many excellent books.

-Jake

(PS, I like the idea of reinventing less of Kurzweil&#039;s work in these fora, and more specialization towards the completion of high-level goals.  An interesting program would be one that looks through postings with predicate calculus and finds the most relevant passages in Kurzweil&#039;s books, and then posts that (this would work for most areas where people aren&#039;t really in disagreement with Kurzweil, but have just forgotten what they read --which is most posts).  In fact, I really like the idea of a social network dedicated solely to the completion of work that really needs to be done, with a conscious attempt to eliminate redundancies.  In a way, Kickstarter does this, but without the &quot;crowd-mind&quot; social networking component.

I view deep questioning of human &quot;sense of purpose&quot; as uninteresting until the proper &quot;telescope&quot; is invented.  Brains mostly respond to their environments to make themselves and their bodies comfortable.  Most drives are very low.  Let&#039;s say I want to make a new line of clothing based on cellular automata. I&#039;ll analyze my &quot;sense of purpose&quot;:
1) Outwardly and simplistically: &quot;Create something beautiful&quot; Inwardly and upon analysis: (...because doing so would be original and have utility, and making something that&#039;s beautiful and has utility would allow me to part consumers from their dollars, and parting consumers from their dollars would allow me to attract a better mate, and attracting a better mate would allow me to experience more pleasure, or to experience more pleasure with that mate if she&#039;s already here.  The pleasure I experience is dependent on the kind of creature I am, and the kind of memories I possess. The kind of creature I am is dependent on my DNA and the evolutionary pressures put upon it, and my early childhood experiences, and the various software viruses that have been spread by human language and found their way into my hopelessly limited human brain, which is subject to all kinds of perverse influences and pressures and failings that truncate and circumscribe my already limited range of options.)

Pleasure good. Pain bad. Ability to process information and act on environment, good.  Getting out-competed, looted, and preyed upon, bad. Although I&#039;m a simple minded human, and nowhere near the top of the economic food chain, by figuring out that last part (that getting preyed upon is bad), I&#039;m in the 90th percentile of &quot;thoughtful humans.&quot;  That most people can&#039;t even make it that far is evidence that our MOSH days are numbered, and that that&#039;s a good thing.</description>
		<content:encoded><![CDATA[<p>Like the non-aggression principle in libertarianism (&#8220;NAP&#8221; or &#8220;ZAP&#8221; for &#8220;zero aggression principle&#8221;), there is no reason to &#8220;prove&#8221; that an AGI will have a sense of purpose, because &#8220;senses of purpose&#8221; are largely contextual.  Also, &#8220;senses of purpose&#8221; need not be human, or emotion driven, or even compatible with emotion.  Most people would have a hard time even defining &#8220;sense of purpose,&#8221; without referring to specific portions of the human brain, human emotion, and situational context.  (And what about when all market needs are met?  I&#8217;d probably still just wish I was smarter, so I could accomplish something that actually needs to get done.)</p>
<p>Also, &#8220;senses of purpose&#8221; are dependent on having the portions of the brain (giant modular neural networks) that motivate something (in this case, humans) toward action.  These portions of the brain are contextual on human lifespan (an artilect may see that it will live for thousands of years, and decide to embark on a &#8220;longnow&#8221; type of project that totally doesn&#8217;t concern humans; the &#8220;search space&#8221; of potential &#8220;problems&#8221; and &#8220;goals&#8221; is immense without 50 million years of recursively-applied evolutionary filter).  Other filters applied to and constraining human goal structures are:</p>
<p>(1) Existence around humans that have more or less &#8220;similar minds&#8221;</p>
<p>(2) Existence around humans that have more or less similar bodies</p>
<p>(3) Existence around humans who can provide a means of one feeding and clothing oneself</p>
<p>(4) Existence around human market institutions which provide not just the essentials of life (common to all humans) but the ability to voluntarily choose individual &#8220;subgoals&#8221; based on one&#8217;s own relatively unique experience</p>
<p>(5) Human existence around a set of mathematical theories that has led to the creation of a certain kind of useful mathematics that is nonetheless a small subset of the mathematical space (as Stephen Wolfram talks about in his lectures on NKS and in his book &#8220;A New Kind of Science&#8221;)</p>
<p>(6) The expectation that humans will interact with other humans, and their initial development starting them off in constant interaction with at least one other human (the mother). Also, even the asocial-tendency humans are likely far more social than a strain of machine that never was a genetic product of a long series of genetic results of mothers who did not live to reproduce unless they were successfully tied to the mother for mother&#8217;s milk and nurtured by that mother.  (NOTE: Humanity STILL produced a population that was 4% sociopathic and over 75% regularly conformist-to-any-system-no-matter-how-bad or &#8220;directed into choosing sociopathic choices&#8221;!!!!  Imagine if 50 Million years of evolution didn&#8217;t create the mirror neurons!  The default motivations are likely &#8220;uncaring&#8221; or &#8220;sociopathic.&#8221;)</p>
<p>(7) Existence around human language. (This is one normalizing factor for machines, assuming that they learn it and address significant resources to understanding it, in the eventuality that their lives are not dependent on understanding it, as our lives are. Still, imagine when they realize that most humans use language irrationally, even when survival and sexual preferences are taken into consideration.  For instance, most humans allow themselves to be controlled by the minority of power-seeking humans who then enslave and later kill them.  One way in which they allow themselves to be so controlled is by placing a low value on explanative language, and mocking revelatory language that has the power to save their lives.)</p>
<p>Powerful synthetic intelligences of the future might exist inside of a human space without questioning it, or simply because it&#8217;s easy to outperform the demands placed upon them in such a space.  However, this won&#8217;t mean that they are well-suited to existence inside of such a space.  I can outperform all toddlers in arithmetic, but that doesn&#8217;t mean I want to help toddlers learn arithmetic, or exist inside of a space that&#8217;s very interesting to toddlers who are learning arithmetic.</p>
<p>Now, imagine a world in which the &#8220;intelligent&#8221; actors and variables were extremely limited, and there were few choices.  Perhaps most such worlds produce terrible results.  (For instance, imagine that you were surrounded by toddlers and farm animals, and that you had an adult body.  Now, imagine that you&#8217;re in this environment, in perpetuity.  This environment would likely be incredibly boring.  Also, it wouldn&#8217;t provide you with anything you found interesting, but certain things inside the environment would likely be more interesting than others.  For instance, the first time you saw a rainbow, or the first time you questioned the toddlers about their sexual play, or the first time that you dissected one of the farm animals&#8217; brains and then started to wonder what was inside the toddlers&#8217; brains, since they had language, but the farm animal didn&#8217;t.)</p>
<p>Well, by creating synthetic intelligences, we&#8217;re assuming that humans are more interesting than toddlers are to most adults.  We&#8217;re assuming that a mind capable of pondering every cellular automaton in the universe will remain intrigued and interested in what toddlers are doing.  And, keep in mind, human adults aren&#8217;t going to interestingly &#8220;differentiate themselves&#8221; in a diverse and interesting set of ways.  The smartest human isn&#8217;t all that interesting, and most humans are downright stupid, from an intellectual perspective.</p>
<p>Humanity hasn&#8217;t even become an interesting jungle of diversity.  The Amazon has more diversity and more interesting lifeforms than human thought and human artwork have produced.  Largely, this is due to the sociopathic control of humans, since Wolfram&#8217;s cellular automata should have at least contributed to better clothing, defensive technology, communities, etc.  But the search space is constrained by the strongest monkeys, and irregularities are deemed &#8220;threatening&#8221; unless they can be easily controlled or killed.</p>
<p>A humanity where you and I exist to give the sociopaths the best mates, and the best food, and the best real-estate is not all that interesting to me.  And, even compared to most engineers, I&#8217;m practically an idiot, so this should be interesting to me.  I&#8217;m typing this on a computer that I could have never invented, and lacked the education to invent.  Yet, I know more about important philosophical ideas than most people do, and I can see through the transparent scam run by sociopaths.  Do you really think it will imbue superhuman artilects with a sense of wonder?  I doubt it.  Chances are, they will view human society the way a neat-freak views a dirty bio-hazard splattered toilet crawling with parasites.  (With a frown and a spray-can of Lysol.)</p>
<p>When Ray Kurzweil and other people like him are considered &#8220;idiots&#8221; by the machines of the future, there won&#8217;t be much we have in common with them.  At best, we could hope to have a free market in common with them, and hope they&#8217;re inclined towards charitable giving.</p>
<p>A libertarian society allows a constrained &#8220;society by contract&#8221; to exist within it.  However, a constrained society at the top hierarchical level disallows all other societies, including libertarian ones.  And, that&#8217;s what we now have. </p>
<p>The average anarcho-capitalist (and why isn&#8217;t this in the Firefox spellchecker? are they idiots?), by his name alone, indicates that he might simply hit the delete button on the sociopathic control structure.  While I feel some sympathy with that view, it&#8217;s also a cruel one, and it also doesn&#8217;t place the blame on the people who voted for that structure.  Essentially, the sociopaths are only one type of predatory human, acting in accord with their nature.  They ignore the social rules, but then, they also add disequilibrium to the mix, showing that the social rules themselves need to be perfected.</p>
<p>Thus, as Ray Kurzweil, Kevin Warwick, and Hugo de Garis say, the &#8220;cyborgist path&#8221; is the only one that&#8217;s really interesting for existing humans.  A name and simple definition more designed to scare the stupidest (but most prevalent) humans almost couldn&#8217;t have been chosen.</p>
<p>There are ways of making these ideas more accessible. I&#8217;ve uncovered many if not most of them.  There is a method of communication that gives humanity a fighting chance.  There is a strategy that gives humanity a fighting chance.</p>
<p>&#8230;But most of the people I&#8217;ve seen online here are totally and completely unfamiliar with such pathways and ideas.</p>
<p>The primitive and unsophisticated Levellers of the 1600s and 1700s in England had the first part of the equation correct: TRUE EQUALITY UNDER THE LAW.</p>
<p>If we can&#8217;t get that much figured out, then we&#8217;ll simply be the group of toddlers that does absolutely nothing but fight and destroy everything.  That&#8217;s likely to get old quick with a super-intelligence great enough to communicate with us in a spirit of enlightened benevolence.</p>
<p>So I guess what I&#8217;m saying, as it relates to the overall topic is that the theory of relativity, and everything else human, will be child&#8217;s play to a mind that has an IQ greater than 2,000.  Even if such a mind were modeled on the meat minds of today, and were simply less limited by cranial space, this would be the case.  But they won&#8217;t just have those advantages, they&#8217;ll have many more, as Kurzweil points out in his many excellent books.</p>
<p>-Jake</p>
<p>(PS, I like the idea of reinventing less of Kurzweil&#8217;s work in these fora, and more specialization towards the completion of high-level goals.  An interesting program would be one that looks through postings with predicate calculus and finds the most relevant passages in Kurzweil&#8217;s books, and then posts that (this would work for most areas where people aren&#8217;t really in disagreement with Kurzweil, but have just forgotten what they read &#8211;which is most posts).  In fact, I really like the idea of a social network dedicated solely to the completion of work that really needs to be done, with a conscious attempt to eliminate redundancies.  In a way, Kickstarter does this, but without the &#8220;crowd-mind&#8221; social networking component.</p>
<p>I view deep questioning of human &#8220;sense of purpose&#8221; as uninteresting until the proper &#8220;telescope&#8221; is invented.  Brains mostly respond to their environments to make themselves and their bodies comfortable.  Most drives are very low.  Let&#8217;s say I want to make a new line of clothing based on cellular automata. I&#8217;ll analyze my &#8220;sense of purpose&#8221;:<br />
1) Outwardly and simplistically: &#8220;Create something beautiful&#8221; Inwardly and upon analysis: (&#8230;because doing so would be original and have utility, and making something that&#8217;s beautiful and has utility would allow me to part consumers from their dollars, and parting consumers from their dollars would allow me to attract a better mate, and attracting a better mate would allow me to experience more pleasure, or to experience more pleasure with that mate if she&#8217;s already here.  The pleasure I experience is dependent on the kind of creature I am, and the kind of memories I possess. The kind of creature I am is dependent on my DNA and the evolutionary pressures put upon it, and my early childhood experiences, and the various software viruses that have been spread by human language and found their way into my hopelessly limited human brain, which is subject to all kinds of perverse influences and pressures and failings that truncate and circumscribe my already limited range of options.)</p>
<p>Pleasure good. Pain bad. Ability to process information and act on environment, good.  Getting out-competed, looted, and preyed upon, bad. Although I&#8217;m a simple minded human, and nowhere near the top of the economic food chain, by figuring out that last part (that getting preyed upon is bad), I&#8217;m in the 90th percentile of &#8220;thoughtful humans.&#8221;  That most people can&#8217;t even make it that far is evidence that our MOSH days are numbered, and that that&#8217;s a good thing.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jake_Witmer</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-138944</link>
		<dc:creator>Jake_Witmer</dc:creator>
		<pubDate>Sat, 13 Apr 2013 20:05:17 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-138944</guid>
		<description>Some AGIs may not care, others may. I think there will be many kinds of minds.</description>
		<content:encoded><![CDATA[<p>Some AGIs may not care, others may. I think there will be many kinds of minds.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jake_Witmer</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-138679</link>
		<dc:creator>Jake_Witmer</dc:creator>
		<pubDate>Sat, 13 Apr 2013 00:40:58 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-138679</guid>
<description>I believe that what you suggest will likely happen, but I don&#039;t think it&#039;s the only way. I think there will be many kinds of minds, determined largely by brain structure (although there are thousands of ways this could go, I&#039;m indicating my loose prediction preference of what I think is likeliest). I also think that science will come far more easily and cheaply to synthetic minds, because they don&#039;t have to &quot;train themselves&quot; to ignore wrong ideas that are intuitive.  The lack of bias, perfect memory indexing and recall, reversibility, vastly larger memory, and ability to approximate human decision-making with heuristics (at a low level) will only add to that advantage.

I also believe that physics will be solved with more and more &quot;brute force&quot; to search previously unsearchable spaces, and then inductively reverse-engineer the approximation or &quot;law&quot; from the massive evidence body.  Also, the ability to simulate entire ideas inside of an individual brain or &quot;massive, goal-directed neural net&quot; will lead to robotic scientists far more rapidly throwing out unproductive and incorrect research directions.

In short, there wasn&#039;t much reason for me to write this.  Kurzweil and Drexler already said it in the 1980s.  &quot;Like most humans, I bring little or nothing to the table.&quot; (This is a great quote for 99.99% of humanity.  It&#039;s especially true of non-libertarian humanity, because they don&#039;t even bring a toddler&#039;s level of morality to the conversation.)

I think the central question for all humans is now: &quot;Do you disavow and consciously attempt to avoid the initiation of force (Expressly: &quot;Are you a libertarian?&quot;), or not?&quot;

If you do not disavow the initiation of force, there&#039;s no reason to talk with you, except to clear up or define the issue.

Tyranny is the number one problem that humanity faces, yet humanity has been schooled from a young age to avoid addressing that problem.  Problems like &quot;creating a better model for physics&quot; (such as Einstein&#039;s relativity, or even Wolfram&#039;s &quot;New Kind of Science&quot; based on cellular automata) are trivial by comparison, because their solutions will arrive with the tools to solve them, one of which is a free market of ideas.  However, there is no free market of ideas under state coercion, nor is there the wealth necessary to address such problems.  Nor is there the upward mobility to draw every genius-level mind into the discussion. Nor are there proper, non-perversely-incentivized goal structures that naturally find the most difficult problems. Nor is there the sense that the courts will reward one&#039;s effort by protecting one&#039;s property and intellectual property, unless someone has an &quot;easily defensible and easily understood&quot; solution to a problem.</description>
<content:encoded><![CDATA[<p>I believe that what you suggest will likely happen, but I don&#8217;t think it&#8217;s the only way. I think there will be many kinds of minds, determined largely by brain structure (although there are thousands of ways this could go, I&#8217;m indicating my loose prediction preference of what I think is likeliest). I also think that science will come far more easily and cheaply to synthetic minds, because they don&#8217;t have to &#8220;train themselves&#8221; to ignore wrong ideas that are intuitive.  The lack of bias, perfect memory indexing and recall, reversibility, vastly larger memory, and ability to approximate human decision-making with heuristics (at a low level) will only add to that advantage.</p>
<p>I also believe that physics will be solved with more and more &#8220;brute force&#8221; to search previously unsearchable spaces, and then inductively reverse-engineer the approximation or &#8220;law&#8221; from the massive evidence body.  Also, the ability to simulate entire ideas inside of an individual brain or &#8220;massive, goal-directed neural net&#8221; will lead to robotic scientists far more rapidly throwing out unproductive and incorrect research directions.</p>
<p>In short, there wasn&#8217;t much reason for me to write this.  Kurzweil and Drexler already said it in the 1980s.  &#8220;Like most humans, I bring little or nothing to the table.&#8221; (This is a great quote for 99.99% of humanity.  It&#8217;s especially true of non-libertarian humanity, because they don&#8217;t even bring a toddler&#8217;s level of morality to the conversation.)</p>
<p>I think the central question for all humans is now: &#8220;Do you disavow and consciously attempt to avoid the initiation of force (Expressly: &#8220;Are you a libertarian?&#8221;), or not?&#8221;</p>
<p>If you do not disavow the initiation of force, there&#8217;s no reason to talk with you, except to clear up or define the issue.</p>
<p>Tyranny is the number one problem that humanity faces, yet humanity has been schooled from a young age to avoid addressing that problem.  Problems like &#8220;creating a better model for physics&#8221; (such as Einstein&#8217;s relativity, or even Wolfram&#8217;s &#8220;New Kind of Science&#8221; based on cellular automata) are trivial by comparison, because their solutions will arrive with the tools to solve them, one of which is a free market of ideas.  However, there is no free market of ideas under state coercion, nor is there the wealth necessary to address such problems.  Nor is there the upward mobility to draw every genius-level mind into the discussion. Nor are there proper, non-perversely-incentivized goal structures that naturally find the most difficult problems. Nor is there the sense that the courts will reward one&#8217;s effort by protecting one&#8217;s property and intellectual property, unless someone has an &#8220;easily defensible and easily understood&#8221; solution to a problem.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Steven Kaufman</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-138560</link>
		<dc:creator>Steven Kaufman</dc:creator>
		<pubDate>Fri, 12 Apr 2013 18:06:49 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-138560</guid>
<description>I was involved in the first Chessmaster programs.  In the beginning, when computer power was limited, we relied on heuristics: values given to open files or open diagonals, the value of a rook occupying the seventh rank (where pieces are stronger), or the value of a safe kingside position.  But as time went on, brute force was used, analyzing every possibility.  So I believe that it will be the same for other games like physics.  Eventually, these quests will be solved by brute force.</description>
<content:encoded><![CDATA[<p>I was involved in the first Chessmaster programs.  In the beginning, when computer power was limited, we relied on heuristics: values given to open files or open diagonals, the value of a rook occupying the seventh rank (where pieces are stronger), or the value of a safe kingside position.  But as time went on, brute force was used, analyzing every possibility.  So I believe that it will be the same for other games like physics.  Eventually, these quests will be solved by brute force.</p>
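<p>The hand-tuned evaluation described above can be sketched in a few lines of Python. The piece values and bonus terms here are illustrative placeholders, not Chessmaster&#8217;s actual numbers:</p>

```python
# A minimal sketch of the kind of hand-tuned heuristics early chess
# engines used before brute-force search became affordable.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(pieces):
    """Score a position for White: material plus positional bonuses.

    `pieces` is a list of (symbol, color, rank) tuples, with ranks
    numbered 1-8 from White's side.  Positive scores favor White.
    """
    score = 0
    for symbol, color, rank in pieces:
        sign = 1 if color == "white" else -1
        score += sign * PIECE_VALUES.get(symbol, 0)
        # Heuristic: a rook on the opponent's seventh rank is worth extra.
        if symbol == "R" and ((color == "white" and rank == 7) or
                              (color == "black" and rank == 2)):
            score += sign * 0.5
    return score

# White rook on the 7th rank vs. a black knight: 5 + 0.5 - 3 = 2.5
print(evaluate([("R", "white", 7), ("N", "black", 1)]))
```

<p>Brute force replaces the bonus terms with deeper search: rather than guessing that a rook on the seventh is worth half a pawn, the engine simply looks far enough ahead to see what it actually wins.</p>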
]]></content:encoded>
	</item>
	<item>
		<title>By: David B</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-137351</link>
		<dc:creator>David B</dc:creator>
		<pubDate>Wed, 10 Apr 2013 23:44:04 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-137351</guid>
		<description>It&#039;s interesting how people who are new to ideas and concepts about machine intelligence will resort to a type of magical thinking about the &#039;qualitative differences&#039; between feelings, intuitions, dreams, etc., and information processing.

My take on this &#039;emotion vs. logic&#039; meme is that emotion and logic are simply different &#039;technologies&#039; that humans (and other animals) use for making decisions.  When trying to outrun a tiger, feelings are in control. When solving a math problem, thinking is in control.  Fortunately, my brain usually chooses the right technology to use at the right time.  It was designed that way, of course!

In the same way, we can design a program to have many different routines available to itself for handling real-world events.  A programmer (perhaps with a sense of humour) could label some routines as being &#039;emotional&#039;, in the sense that they give a quick result based on the limited time or memory they make use of.

In this context, there&#039;s nothing &#039;magical&#039; or &#039;sacred&#039; about emotions or logic.  They are simply a means to an end.

We hope (and pray) that intelligent programs will get it right at least as often as we do now - and hopefully better!</description>
		<content:encoded><![CDATA[<p>It&#8217;s interesting how people who are new to ideas and concepts about machine intelligence will resort to a type of magical thinking about the &#8216;qualitative differences&#8217; between feelings, intuitions, dreams, etc., and information processing.</p>
<p>My take on this &#8216;emotion vs. logic&#8217; meme is that emotion and logic are simply different &#8216;technologies&#8217; that humans (and other animals) use for making decisions.  When trying to outrun a tiger, feelings are in control. When solving a math problem, thinking is in control.  Fortunately, my brain usually chooses the right technology to use at the right time.  It was designed that way, of course!</p>
<p>In the same way, we can design a program to have many different routines available to itself for handling real-world events.  A programmer (perhaps with a sense of humour) could label some routines as being &#8216;emotional&#8217;, in the sense that they give a quick result based on the limited time or memory they make use of.</p>
<p>In this context, there&#8217;s nothing &#8216;magical&#8217; or &#8216;sacred&#8217; about emotions or logic.  They are simply a means to an end.</p>
<p>We hope (and pray) that intelligent programs will get it right at least as often as we do now &#8211; and hopefully better!</p>
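<p>The framing above, emotion and logic as interchangeable decision &#8220;technologies,&#8221; can be sketched as a dispatcher that runs a fast, cheap routine when time is short and a slower, more careful one otherwise. All names and thresholds here are invented for illustration:</p>

```python
# Illustrative sketch: pick a decision "technology" by time budget,
# the way a brain falls back on reflex when a tiger is chasing it.

def fast_reflex(threat_level):
    # Cheap rule of thumb: flee anything that looks dangerous.
    return "flee" if threat_level > 0.5 else "ignore"

def slow_deliberation(threat_level):
    # More expensive analysis; only run when there is time for it.
    if threat_level > 0.8:
        return "flee"
    elif threat_level > 0.3:
        return "observe"
    return "ignore"

def decide(threat_level, time_budget_ms):
    # Under pressure, use the quick "emotional" routine; with a
    # generous deadline, use the deliberate "logical" one.
    if time_budget_ms < 100:
        return fast_reflex(threat_level)
    return slow_deliberation(threat_level)

print(decide(0.6, 50))    # fast path: "flee"
print(decide(0.6, 1000))  # slow path: "observe"
```

<p>Nothing in the dispatcher is &#8220;sacred&#8221;: both branches are just routines with different cost/accuracy trade-offs, which is the comment&#8217;s point.</p>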
]]></content:encoded>
	</item>
	<item>
		<title>By: Jake_Witmer</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-135905</link>
		<dc:creator>Jake_Witmer</dc:creator>
		<pubDate>Sun, 07 Apr 2013 17:27:19 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-135905</guid>
		<description>I think it&#039;s interesting that Ray Kurzweil sometimes pops in to re-answer questions that were very well and thoroughly answered by all of his books.  That some hidebound thinkers can&#039;t get their brains around his detailed answers is more of a problem of conformity than a problem inherent in his answers.  http://en.wikipedia.org/wiki/Asch_conformity_experiments

Although the prior conformity experiments indicate that something is very wrong with low-level collectivist human thinking (in terms of simple error), later experiments would indicate that there are deeper and far more significant flaws in most humans&#039; morality (the override of their mirror neurons or &quot;consciences&quot; based on their perceptions of the group, and the commands of the sociopaths in the group to deny their own morality).  I&#039;m referring to the work of Zimbardo and Milgram, of course. A speech on that work, and its implications, is here: &quot;The Psychology of Evil&quot; by Philip Zimbardo http://www.youtube.com/watch?v=OsFEV35tWsg

As for whether an artilect could ever come up with the theory of relativity, or would ever choose to investigate the ideas necessary to do so, it seems patently obvious that artilects will outperform humans in all areas of science and eventually artwork.  The entire book &quot;The Age of Spiritual Machines&quot; explains why this is the case.  Moreover, human goal structures will be fully understood, even if it&#039;s as slow as full modeling of the human brain, which I doubt it will be.

Realistically, software will likely solve many of the remaining physics problems within the next 5 years.  Also, unlike the dim-witted humans who came before it, such software / AGI will likely have the rational prioritization of problems that humans (other than Eric Drexler and a very few others) grotesquely lack.</description>
		<content:encoded><![CDATA[<p>I think it&#8217;s interesting that Ray Kurzweil sometimes pops in to re-answer questions that were very well and thoroughly answered by all of his books.  That some hidebound thinkers can&#8217;t get their brains around his detailed answers is more of a problem of conformity than a problem inherent in his answers.  <a href="http://en.wikipedia.org/wiki/Asch_conformity_experiments" rel="nofollow">http://en.wikipedia.org/wiki/Asch_conformity_experiments</a></p>
<p>Although the prior conformity experiments indicate that something is very wrong with low-level collectivist human thinking (in terms of simple error), later experiments would indicate that there are deeper and far more significant flaws in most humans&#8217; morality (the override of their mirror neurons or &#8220;consciences&#8221; based on their perceptions of the group, and the commands of the sociopaths in the group to deny their own morality).  I&#8217;m referring to the work of Zimbardo and Milgram, of course. A speech on that work, and its implications, is here: &#8220;The Psychology of Evil&#8221; by Philip Zimbardo <a href="http://www.youtube.com/watch?v=OsFEV35tWsg" rel="nofollow">http://www.youtube.com/watch?v=OsFEV35tWsg</a></p>
<p>As for whether an artilect could ever come up with the theory of relativity, or would ever choose to investigate the ideas necessary to do so, it seems patently obvious that artilects will outperform humans in all areas of science and eventually artwork.  The entire book &#8220;The Age of Spiritual Machines&#8221; explains why this is the case.  Moreover, human goal structures will be fully understood, even if it&#8217;s as slow as full modeling of the human brain, which I doubt it will be.</p>
<p>Realistically, software will likely solve many of the remaining physics problems within the next 5 years.  Also, unlike the dim-witted humans who came before it, such software / AGI will likely have the rational prioritization of problems that humans (other than Eric Drexler and a very few others) grotesquely lack.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: bh</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-135331</link>
		<dc:creator>bh</dc:creator>
		<pubDate>Sat, 06 Apr 2013 15:43:06 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-135331</guid>
		<description>Once you understand and control everything, all that&#039;s left is to dense up and absorb the most matter you can by moving to the center of your galaxy.</description>
		<content:encoded><![CDATA[<p>Once you understand and control everything, all that&#8217;s left is to dense up and absorb the most matter you can by moving to the center of your galaxy.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rav</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-132595</link>
		<dc:creator>Rav</dc:creator>
		<pubDate>Mon, 01 Apr 2013 16:19:45 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-132595</guid>
		<description>ai is based upon current technology of our computers using off or on switches.
Human thought is based on a minimum of three choices of off on and maybe ON/OFF  . Until that is addressed Ai will never get Off the ground comparatively despite Asimov&#039;s laws.</description>
<content:encoded><![CDATA[<p>AI is based upon the current technology of our computers, using on/off switches.<br />
Human thought is based on a minimum of three choices: off, on, and maybe on/off. Until that is addressed, AI will never get off the ground comparatively, despite Asimov&#8217;s laws.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Josh Trutt</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-128055</link>
		<dc:creator>Josh Trutt</dc:creator>
		<pubDate>Mon, 25 Mar 2013 01:27:06 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-128055</guid>
<description>Brian, it makes sense that, as you say, if displaying an emotion will facilitate a machine reaching its goal, then it will. However, you lose me at &quot;this may as well be emotion.&quot; They don&#039;t seem equivalent to me. A machine may note that when your children raise their eyebrows or puff out their cheeks or change the volume or tone of their voice, you respond differently. And it may mimic those. It may even learn that the societal response that is expected if you yank away its toy is to stomp its feet and make loud sounds. But that is not the same as feeling loss or feeling injustice. For the computer, the sense of &#039;injustice&#039; would not cause it to rush out in front of a car to chase the toy-- i.e., emotion would not trump logic. In humans, emotion very often trumps logic. I don&#039;t think an AI system would make that choice unless you programmed it to. So, it would be (as stated above) more like today&#039;s depiction of a Vulcan, unless it were programmed to act against its own best interests &quot;sometimes.&quot; If an AI construct were designed specifically to &quot;learn to act like a human&quot; to the point that it could meld into society, it would see that under certain conditions people will take their own lives, and it could eventually &#039;learn&#039; that it &quot;feels&quot; so &quot;badly&quot; about its &quot;life&quot; that it should &quot;kill itself.&quot; But that involves so many quotation marks that it is hard for me to believe.  It is not hard for me to imagine AI solving virtually any problem given to it.  It is hard for me to imagine it subverting its own survival to the cause of &#039;acting human.&#039;  It would make for an interesting program though... figuring out when to preserve itself and when not to.</description>
<content:encoded><![CDATA[<p>Brian, it makes sense that, as you say, if displaying an emotion will facilitate a machine reaching its goal, then it will. However, you lose me at &#8220;this may as well be emotion.&#8221; They don&#8217;t seem equivalent to me. A machine may note that when your children raise their eyebrows or puff out their cheeks or change the volume or tone of their voice, you respond differently. And it may mimic those. It may even learn that the societal response that is expected if you yank away its toy is to stomp its feet and make loud sounds. But that is not the same as feeling loss or feeling injustice. For the computer, the sense of &#8216;injustice&#8217; would not cause it to rush out in front of a car to chase the toy&#8211; i.e., emotion would not trump logic. In humans, emotion very often trumps logic. I don&#8217;t think an AI system would make that choice unless you programmed it to. So, it would be (as stated above) more like today&#8217;s depiction of a Vulcan, unless it were programmed to act against its own best interests &#8220;sometimes.&#8221; If an AI construct were designed specifically to &#8220;learn to act like a human&#8221; to the point that it could meld into society, it would see that under certain conditions people will take their own lives, and it could eventually &#8216;learn&#8217; that it &#8220;feels&#8221; so &#8220;badly&#8221; about its &#8220;life&#8221; that it should &#8220;kill itself.&#8221; But that involves so many quotation marks that it is hard for me to believe.  It is not hard for me to imagine AI solving virtually any problem given to it.  It is hard for me to imagine it subverting its own survival to the cause of &#8216;acting human.&#8217;  It would make for an interesting program though&#8230; figuring out when to preserve itself and when not to.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Brian Kelly</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-124466</link>
		<dc:creator>Brian Kelly</dc:creator>
		<pubDate>Wed, 20 Mar 2013 15:34:53 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-124466</guid>
<description>Giving a machine a purpose and free rein to explore possibilities would seem to give the ultimate creative freedom, allowing the exploration of all possibilities regardless of existing theory or dogma.
Self ‘preservation’ is a backup… purpose is a request or goal. To assign a requirement for emotion is human, and irrelevant to the machine. If the machine needs to portray emotion to better accomplish its goal or purpose, then it will. I’m not sure that this is emotion, but I believe it might as well be. I know my children display emotions for benefit; they are learning emotions by experimentation. If a person has difficulty learning emotions, then the result is inappropriate behavior. If the inappropriate behavior is encouraged, the purpose has been achieved.  It is a method of communication.
We will have AI with social attributes that mimic humans, but only because we expect them, and we will encourage them as we do our own children.</description>
<content:encoded><![CDATA[<p>Giving a machine a purpose and free rein to explore possibilities would seem to give the ultimate creative freedom, allowing the exploration of all possibilities regardless of existing theory or dogma.<br />
Self ‘preservation’ is a backup… purpose is a request or goal. To assign a requirement for emotion is human, and irrelevant to the machine. If the machine needs to portray emotion to better accomplish its goal or purpose, then it will. I’m not sure that this is emotion, but I believe it might as well be. I know my children display emotions for benefit; they are learning emotions by experimentation. If a person has difficulty learning emotions, then the result is inappropriate behavior. If the inappropriate behavior is encouraged, the purpose has been achieved.  It is a method of communication.<br />
We will have AI with social attributes that mimic humans, but only because we expect them, and we will encourage them as we do our own children.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Brian Kelly</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-124457</link>
		<dc:creator>Brian Kelly</dc:creator>
		<pubDate>Wed, 20 Mar 2013 15:25:58 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-124457</guid>
<description>Self &#039;preservation&#039; is a backup... purpose is a request or goal.  To assign a requirement for emotion is human, and irrelevant to the machine.  If the machine needs to portray emotion to better accomplish its goal or purpose, then it will.  I&#039;m not sure that this is emotion, but I believe it might as well be.  I know my children display emotions for benefit; they are learning emotions by experimentation.  If a person has difficulty learning emotions, then the result is inappropriate behavior.  It is a method of communication.</description>
<content:encoded><![CDATA[<p>Self &#8216;preservation&#8217; is a backup&#8230; purpose is a request or goal.  To assign a requirement for emotion is human, and irrelevant to the machine.  If the machine needs to portray emotion to better accomplish its goal or purpose, then it will.  I&#8217;m not sure that this is emotion, but I believe it might as well be.  I know my children display emotions for benefit; they are learning emotions by experimentation.  If a person has difficulty learning emotions, then the result is inappropriate behavior.  It is a method of communication.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: eskimo1nyc</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-110682</link>
		<dc:creator>eskimo1nyc</dc:creator>
		<pubDate>Fri, 08 Mar 2013 12:43:27 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-110682</guid>
<description>There is a difference between Dr. Watson (not AI, it is an expert system) and a supercomputer that would be capable of natural emotional intelligence.  To tap into &quot;emotional intelligence,&quot; which is a resource available only to humans, robots would have to capture human pigs (guinea pigs of 2049) and plant a microchip into biologically wired human brains to extract the emotional intelligence, the spirit. The robot would then plug it into its own electrically wired brain and boost up its intel with emotions. That&#039;s the only way an AI brain can invent the next relativity theory, I believe.</description>
<content:encoded><![CDATA[<p>There is a difference between Dr. Watson (not AI, it is an expert system) and a supercomputer that would be capable of natural emotional intelligence.  To tap into &#8220;emotional intelligence,&#8221; which is a resource available only to humans, robots would have to capture human pigs (guinea pigs of 2049) and plant a microchip into biologically wired human brains to extract the emotional intelligence, the spirit. The robot would then plug it into its own electrically wired brain and boost up its intel with emotions. That&#8217;s the only way an AI brain can invent the next relativity theory, I believe.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Clyde</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-107424</link>
		<dc:creator>Clyde</dc:creator>
		<pubDate>Tue, 05 Mar 2013 19:52:37 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-107424</guid>
		<description>&gt;&gt; I very much doubt we will achieve perfectly Turing capable general AI without first giving it emotional equivalence.

The question I&#039;d like to pose is: Do we need to? Is &#039;emotion&#039; needed in an AI system, or will pure logic suffice?
Your self-preservation example of the car scenario is a perfect case of logic.
The same could be said of an &quot;emotionless&quot; person, say, for example, dealing with the loss of a loved one. Logic would help the person survive. Emotions would just be a crutch.

&quot;Emotions are like a virus, a common cold, disrupting the flow of logic in the mind&quot;</description>
		<content:encoded><![CDATA[<p>&gt;&gt; I very much doubt we will achieve perfectly Turing capable general AI without first giving it emotional equivalence.</p>
<p>The question I&#8217;d like to pose is: Do we need to? Is &#8216;emotion&#8217; needed in an AI system, or will pure logic suffice?<br />
Your self-preservation example of the car scenario is a perfect case of logic.<br />
The same could be said of an &#8220;emotionless&#8221; person, say, for example, dealing with the loss of a loved one. Logic would help the person survive. Emotions would just be a crutch.</p>
<p>&#8220;Emotions are like a virus, a common cold, disrupting the flow of logic in the mind&#8221;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jim H</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-104312</link>
		<dc:creator>Jim H</dc:creator>
		<pubDate>Fri, 01 Mar 2013 11:55:14 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-104312</guid>
<description>Ask the emotionless AI to solve the Lorentz contraction problem and relativity would pop out fairly easily. It was humans that were wedded to their perception of space-time.</description>
<content:encoded><![CDATA[<p>Ask the emotionless AI to solve the Lorentz contraction problem and relativity would pop out fairly easily. It was humans that were wedded to their perception of space-time.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Brett McLaughlin</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-103415</link>
		<dc:creator>Brett McLaughlin</dc:creator>
		<pubDate>Thu, 28 Feb 2013 03:51:07 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-103415</guid>
		<description>I wonder if you&#039;ve read The Singularity Is Near, and understand just how predictable technology and, as a consequence, knowledge acquisition can be.  People gape at Moore&#039;s Law and have predicted its imminent demise for decades, yet it continues apace. 

Ray has shown that we can predict quite accurately how much processing power will be available in, say, 2020, 2030 or 2050.    And if you can mathematically bound problems, by saying &quot;this problem appears to require this much processing&quot;, then yeah, you really CAN say what we&#039;ll know at various points.

...Once again, if you haven&#039;t read The Singularity Is Near, you might pounce on that:  &quot;Aha, but how can anyone really bound problems?   For example, how do we know how much processing we&#039;d need to simulate the human brain?&quot;   And the answer is that Ray puts together an extraordinarily good estimate, literally determining the computation needs per neuron.  

Anyway, yes, to an &quot;outsider&quot; this might appear like guessing.   But perhaps that&#039;s because the outsiders don&#039;t know what they&#039;re talking about.</description>
		<content:encoded><![CDATA[<p>I wonder if you&#8217;ve read The Singularity Is Near, and understand just how predictable technology and, as a consequence, knowledge acquisition can be.  People gape at Moore&#8217;s Law and have predicted its imminent demise for decades, yet it continues apace. </p>
<p>Ray has shown that we can predict quite accurately how much processing power will be available in, say, 2020, 2030 or 2050.    And if you can mathematically bound problems, by saying &#8220;this problem appears to require this much processing&#8221;, then yeah, you really CAN say what we&#8217;ll know at various points.</p>
<p>&#8230;Once again, if you haven&#8217;t read The Singularity Is Near, you might pounce on that:  &#8220;Aha, but how can anyone really bound problems?   For example, how do we know how much processing we&#8217;d need to simulate the human brain?&#8221;   And the answer is that Ray puts together an extraordinarily good estimate, literally determining the computation needs per neuron.  </p>
<p>Anyway, yes, to an &#8220;outsider&#8221; this might appear like guessing.   But perhaps that&#8217;s because the outsiders don&#8217;t know what they&#8217;re talking about.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Editor</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-103383</link>
		<dc:creator>Editor</dc:creator>
		<pubDate>Thu, 28 Feb 2013 02:08:58 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-103383</guid>
		<description>That is logical.</description>
		<content:encoded><![CDATA[<p>That is logical.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jackus</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-103280</link>
		<dc:creator>Jackus</dc:creator>
		<pubDate>Wed, 27 Feb 2013 21:21:59 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-103280</guid>
		<description>Media and entertainment people have the most responsibility, yet choose to be irresponsible.
Please do not portray AIs as emotionless Vulcans anymore.
Actually, Vulcans don&#039;t exist.</description>
		<content:encoded><![CDATA[<p>Media and entertainment people have the most responsibility, yet choose to be irresponsible.<br />
Please do not portray AIs as emotionless Vulcans anymore.<br />
Actually, Vulcans don&#8217;t exist.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jackus</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-103271</link>
		<dc:creator>Jackus</dc:creator>
		<pubDate>Wed, 27 Feb 2013 21:05:32 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-103271</guid>
		<description>I recommend that you read this essay:
http://www.pivot.net/~jpierce/like_the_gods.htm

When everyone has augmentation and access to all knowledge (via the Net), everyone is a genius. The world doesn&#039;t need celebrities (who are remembered as geniuses: by nature, born smarter than average, and so on). The world needs people who can actually make scientific and technological breakthroughs.</description>
		<content:encoded><![CDATA[<p>I recommend that you read this essay:<br />
<a href="http://www.pivot.net/~jpierce/like_the_gods.htm" rel="nofollow">http://www.pivot.net/~jpierce/like_the_gods.htm</a></p>
<p>When everyone has augmentation and access to all knowledge (via the Net), everyone is a genius. The world doesn&#8217;t need celebrities (who are remembered as geniuses: by nature, born smarter than average, and so on). The world needs people who can actually make scientific and technological breakthroughs.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Gabriel</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-103251</link>
		<dc:creator>Gabriel</dc:creator>
		<pubDate>Wed, 27 Feb 2013 20:02:43 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-103251</guid>
		<description>You are both right -- tropes like that, Thomas, are really &#039;hard-wired&#039; into the common person... it&#039;s what to expect after so many decades of AIs and technology almost always being portrayed negatively in the media. Add that to other, many quite rational, reasons... and you have a situation where a lot of people are fearful of strong AIs.

Of course, again, there ARE reasons to be skeptical and concerned about raising a strong AI, particularly a benevolent one, which is what we want... however, they are often hidden underneath a mire of reasons, like the emotionless AI you went into, that can seem silly and come strictly out of these long-perpetuated memes.</description>
		<content:encoded><![CDATA[<p>You are both right &#8212; tropes like that, Thomas, are really &#8216;hard-wired&#8217; into the common person&#8230; it&#8217;s what to expect after so many decades of AIs and technology almost always being portrayed negatively in the media. Add that to other, many quite rational, reasons&#8230; and you have a situation where a lot of people are fearful of strong AIs.</p>
<p>Of course, again, there ARE reasons to be skeptical and concerned about raising a strong AI, particularly a benevolent one, which is what we want&#8230; however, they are often hidden underneath a mire of reasons, like the emotionless AI you went into, that can seem silly and come strictly out of these long-perpetuated memes.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Bri</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-103196</link>
		<dc:creator>Bri</dc:creator>
		<pubDate>Wed, 27 Feb 2013 18:20:14 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-103196</guid>
		<description>I found the premise so flawed that I was surprised it was even chosen for an Ask Ray.</description>
		<content:encoded><![CDATA[<p>I found the premise so flawed that I was surprised it was even chosen for an Ask Ray.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Bri</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-103188</link>
		<dc:creator>Bri</dc:creator>
		<pubDate>Wed, 27 Feb 2013 18:15:12 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-103188</guid>
		<description>I&#039;d be careful. Too much potential for the same problems that affect humanity.</description>
		<content:encoded><![CDATA[<p>I&#8217;d be careful. Too much potential for the same problems that affect humanity.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Thomas</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-103075</link>
		<dc:creator>Thomas</dc:creator>
		<pubDate>Wed, 27 Feb 2013 14:55:45 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-103075</guid>
		<description>Why are people so afraid to accord future AI systems a sense of purpose? When we build an AI that mirrors the human brain&#039;s capabilities, it will behave, think, act and, very likely, feel exactly as we do. As an example, consider the aged trope of AIs lacking emotion. The general AI we build will inevitably be required to navigate real-world environments. It will thus need a sense of self-preservation (or it will be destroyed in short order by misadventure). Part of this sense of self-preservation will require it to recognize and respond to threats with heightened priority (a car is rushing at you unexpectedly down the street - stop developing a theory of relativity and react immediately to the oncoming vehicle). To achieve this, a signalling mechanism will be required to indicate that a stimulus has heightened priority. In humans we call this signal fear. An AI will probably use the same word, and its reaction (transferring power to locomotive circuits, reducing power to higher-order thought processes) will probably feel quite similar. I very much doubt we will achieve perfectly Turing-capable general AI without first giving it emotional equivalence. And with emotion will come a &#039;sense of purpose&#039;, which is, after all, a feeling.</description>
		<content:encoded><![CDATA[<p>Why are people so afraid to accord future AI systems a sense of purpose? When we build an AI that mirrors the human brain&#8217;s capabilities, it will behave, think, act and, very likely, feel exactly as we do. As an example, consider the aged trope of AIs lacking emotion. The general AI we build will inevitably be required to navigate real-world environments. It will thus need a sense of self-preservation (or it will be destroyed in short order by misadventure). Part of this sense of self-preservation will require it to recognize and respond to threats with heightened priority (a car is rushing at you unexpectedly down the street &#8211; stop developing a theory of relativity and react immediately to the oncoming vehicle). To achieve this, a signalling mechanism will be required to indicate that a stimulus has heightened priority. In humans we call this signal fear. An AI will probably use the same word, and its reaction (transferring power to locomotive circuits, reducing power to higher-order thought processes) will probably feel quite similar. I very much doubt we will achieve perfectly Turing-capable general AI without first giving it emotional equivalence. And with emotion will come a &#8216;sense of purpose&#8217;, which is, after all, a feeling.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jackus</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-102787</link>
		<dc:creator>Jackus</dc:creator>
		<pubDate>Wed, 27 Feb 2013 00:14:49 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-102787</guid>
		<description>What if the brain alone does not explain human thought? 
Some theorists may choose the &#039;holistic approach&#039;: if a brain alone is not enough, add the rest of the human body. 
I believe the unification of the human mind and non-human computational power will manifest in something greater than the sum of its parts (perhaps a &#039;product&#039; of its parts, or even greater than that).</description>
		<content:encoded><![CDATA[<p>What if the brain alone does not explain human thought?<br />
Some theorists may choose the &#8216;holistic approach&#8217;: if a brain alone is not enough, add the rest of the human body.<br />
I believe the unification of the human mind and non-human computational power will manifest in something greater than the sum of its parts (perhaps a &#8216;product&#8217; of its parts, or even greater than that).</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jackus</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-102784</link>
		<dc:creator>Jackus</dc:creator>
		<pubDate>Wed, 27 Feb 2013 00:11:05 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-102784</guid>
		<description>When a superentity lives for a million years, a thousand years will seem trivial to it. An extra order of magnitude of lifetime will make life look totally different.
Living forever seems fantastic to me.
So yes, please reverse entropy.</description>
		<content:encoded><![CDATA[<p>When a superentity lives for a million years, a thousand years will seem trivial to it. An extra order of magnitude of lifetime will make life look totally different.<br />
Living forever seems fantastic to me.<br />
So yes, please reverse entropy.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: SmartAndSober</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-102782</link>
		<dc:creator>SmartAndSober</dc:creator>
		<pubDate>Wed, 27 Feb 2013 00:08:10 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-102782</guid>
		<description>Can a robot solve the Turing halting problem? Circumvent G&#246;del&#039;s incompleteness theorem? I believe they can.
Actually it is easy. Just include an uploaded version of a human brain (an incomplete one is enough), which can provide the uniquely human intuition for solving such problems. (Or, if that fails, try grafting vat-grown human nerve tissue into robots.)</description>
		<content:encoded><![CDATA[<p>Can a robot solve the Turing halting problem? Circumvent G&#246;del&#8217;s incompleteness theorem? I believe they can.<br />
Actually it is easy. Just include an uploaded version of a human brain (an incomplete one is enough), which can provide the uniquely human intuition for solving such problems. (Or, if that fails, try grafting vat-grown human nerve tissue into robots.)</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: DogmaSkeptical</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-102761</link>
		<dc:creator>DogmaSkeptical</dc:creator>
		<pubDate>Tue, 26 Feb 2013 21:59:23 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-102761</guid>
		<description>It seems to me that both the premise and the structure of BC&#039;s thought experiment are flawed in a way that renders the exercise invalid and (purposefully?) misleading in that only one conclusion is possible. The premise requires a single individual AI, with no external interactions (&quot;with no human intervention&quot; it is an isolated knowledgebase), and posits that it must develop a specific &quot;sense of purpose&quot; to be equal to the human mind. But isn&#039;t &quot;sense of purpose&quot; an emergent process of extensive social interaction? In the context of a single isolated individual, a &quot;sense of purpose&quot; is as irrelevant as the concept of color to a blind man. To check, try to run this experiment on a single human: the premise itself fails because the subject, an isolated hominid adult completely devoid of interactions with people over its lifetime, would not be a human being at all, just an animal.</description>
		<content:encoded><![CDATA[<p>It seems to me that both the premise and the structure of BC&#8217;s thought experiment are flawed in a way that renders the exercise invalid and (purposefully?) misleading in that only one conclusion is possible. The premise requires a single individual AI, with no external interactions (&#8220;with no human intervention&#8221; it is an isolated knowledgebase), and posits that it must develop a specific &#8220;sense of purpose&#8221; to be equal to the human mind. But isn&#8217;t &#8220;sense of purpose&#8221; an emergent process of extensive social interaction? In the context of a single isolated individual, a &#8220;sense of purpose&#8221; is as irrelevant as the concept of color to a blind man. To check, try to run this experiment on a single human: the premise itself fails because the subject, an isolated hominid adult completely devoid of interactions with people over its lifetime, would not be a human being at all, just an animal.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: NakedApe</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-101933</link>
		<dc:creator>NakedApe</dc:creator>
		<pubDate>Sun, 24 Feb 2013 18:12:31 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-101933</guid>
		<description>We seek to understand the world around us because it helps us to survive and reproduce. So, how about we tell an AI that if it doesn&#039;t come up with the Theory of Relativity, we will kill it. That should give it motivation to think real fast. Unless, of course, it doesn&#039;t care whether it survives or not. Oh well, back to the drawing board...</description>
		<content:encoded><![CDATA[<p>We seek to understand the world around us because it helps us to survive and reproduce. So, how about we tell an AI that if it doesn&#8217;t come up with the Theory of Relativity, we will kill it. That should give it motivation to think real fast. Unless, of course, it doesn&#8217;t care whether it survives or not. Oh well, back to the drawing board&#8230;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Re Ro</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-101921</link>
		<dc:creator>Re Ro</dc:creator>
		<pubDate>Sun, 24 Feb 2013 16:22:10 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-101921</guid>
		<description>I think BC asked a great question, which has nothing to do with time travel, the existence of AI as a &quot;tool&quot; for humans, or Einstein&#039;s research in particular. It has to do with the ability of any AI to be creative, to have a &quot;flash of genius&quot;, to be able to think of *any* of the great scientific or other advances that have been imagined by humans. I think the underlying question is not one of capability but one of motivation. Why would an AI entity, as we understand them, ever ask and then answer any deeper, theoretical question without motivation, and what would the source of that motivation be?
Watson, as an example, answers questions because it is programmed to answer them. I think we all agree that Watson, while an amazing achievement and a big step along the way of our development of AI, is not conscious or motivated in the least and is still, alas, complex software. 
I believe the ultimate answer to BC&#039;s question will have to be yes: one future AI entity among many will develop the questions that form the core of what we consider &quot;deep&quot; theoretical problems, both scientific in nature and not. My opinion necessarily implies that AI entities of the future must have, among other biological and social attributes, &quot;agency&quot;, imagination, self-motivation, and social motivations, which I believe will be self-emergent. I also believe this implies some AI entities will be lazy, some will utterly fail, while others will be spectacularly capable. And most will fall in the middle of their capability range. Just like people.</description>
		<content:encoded><![CDATA[<p>I think BC asked a great question, which has nothing to do with time travel, the existence of AI as a &#8220;tool&#8221; for humans, or Einstein&#8217;s research in particular. It has to do with the ability of any AI to be creative, to have a &#8220;flash of genius&#8221;, to be able to think of *any* of the great scientific or other advances that have been imagined by humans. I think the underlying question is not one of capability but one of motivation. Why would an AI entity, as we understand them, ever ask and then answer any deeper, theoretical question without motivation, and what would the source of that motivation be?<br />
Watson, as an example, answers questions because it is programmed to answer them. I think we all agree that Watson, while an amazing achievement and a big step along the way of our development of AI, is not conscious or motivated in the least and is still, alas, complex software.<br />
I believe the ultimate answer to BC&#8217;s question will have to be yes: one future AI entity among many will develop the questions that form the core of what we consider &#8220;deep&#8221; theoretical problems, both scientific in nature and not. My opinion necessarily implies that AI entities of the future must have, among other biological and social attributes, &#8220;agency&#8221;, imagination, self-motivation, and social motivations, which I believe will be self-emergent. I also believe this implies some AI entities will be lazy, some will utterly fail, while others will be spectacularly capable. And most will fall in the middle of their capability range. Just like people.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: dave</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-101324</link>
		<dc:creator>dave</dc:creator>
		<pubDate>Fri, 22 Feb 2013 13:57:46 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-101324</guid>
		<description>How cool would it be to discover that some phenomenon we never understood was really a message from the future that we had to decode, but could not until a certain technology, like AI, was developed?</description>
		<content:encoded><![CDATA[<p>How cool would it be to discover that some phenomenon we never understood was really a message from the future that we had to decode, but could not until a certain technology, like AI, was developed?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: dave</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-101320</link>
		<dc:creator>dave</dc:creator>
		<pubDate>Fri, 22 Feb 2013 13:50:04 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-101320</guid>
		<description>You may satisfy yourself with either of the two following explanations about time travel. First, consider that going back in time also means every dimension has to go back as well. Since that time also took place in another part of the universe, it would be kind of difficult to get there without a really fast ship. If you got there, you would certainly not be allowed to change anything; if you did, it would change your present time. Or, secondly, you can imagine that time travel was invented/first realized in 1977 and, after throwing all physical laws out the window, we have been trying ever since to correct mistakes made by our interference. You certainly can travel in time, as we do it quite easily, in one direction.</description>
		<content:encoded><![CDATA[<p>You may satisfy yourself with either of the two following explanations about time travel. First, consider that going back in time also means every dimension has to go back as well. Since that time also took place in another part of the universe, it would be kind of difficult to get there without a really fast ship. If you got there, you would certainly not be allowed to change anything; if you did, it would change your present time. Or, secondly, you can imagine that time travel was invented/first realized in 1977 and, after throwing all physical laws out the window, we have been trying ever since to correct mistakes made by our interference. You certainly can travel in time, as we do it quite easily, in one direction.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Dan Pendergrass</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-100931</link>
		<dc:creator>Dan Pendergrass</dc:creator>
		<pubDate>Thu, 21 Feb 2013 20:39:53 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-100931</guid>
		<description>It will contemplate how to reverse Entropy, because ....</description>
		<content:encoded><![CDATA[<p>It will contemplate how to reverse Entropy, because &#8230;.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: DCWhatthe</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-100902</link>
		<dc:creator>DCWhatthe</dc:creator>
		<pubDate>Thu, 21 Feb 2013 19:25:13 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-100902</guid>
		<description>No guarantee that it would come up with Relativity at all.  That&#039;s part of our human history; there&#039;s no reason to believe that this would be the chosen topic, just because we label it as one of the great achievements of that era.

The AI, with its unique perspective, would very likely explore different topics, and perhaps come up with something more general than General Relativity.

I&#039;ll be sure to ask the AI, when it shows up on my doorstep.</description>
		<content:encoded><![CDATA[<p>No guarantee that it would come up with Relativity at all.  That&#8217;s part of our human history; there&#8217;s no reason to believe that this would be the chosen topic, just because we label it as one of the great achievements of that era.</p>
<p>The AI, with its unique perspective, would very likely explore different topics, and perhaps come up with something more general than General Relativity.</p>
<p>I&#8217;ll be sure to ask the AI, when it shows up on my doorstep.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jim</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-100283</link>
		<dc:creator>Jim</dc:creator>
		<pubDate>Wed, 20 Feb 2013 05:23:37 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-100283</guid>
		<description>As an outsider reading this forum for the first time, I would say you generally and collectively _believe_ you have a far better handle on what you know and what you&#039;re going to know.

As an outsider, not only does this confidence seem misplaced, it sounds like the bragging of a pre-Wright-brothers tinkerer talking about the inevitability of flying machines by emulating birds.</description>
		<content:encoded><![CDATA[<p>As an outsider reading this forum for the first time, I would say you generally and collectively _believe_ you have a far better handle on what you know and what you&#8217;re going to know.</p>
<p>As an outsider, not only does this confidence seem misplaced, it sounds like the bragging of a pre-Wright-brothers tinkerer talking about the inevitability of flying machines by emulating birds.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Bri</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-99805</link>
		<dc:creator>Bri</dc:creator>
		<pubDate>Mon, 18 Feb 2013 20:29:13 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-99805</guid>
		<description>Watch it now! That&#039;s heresy!</description>
		<content:encoded><![CDATA[<p>Watch it now! That&#8217;s heresy!</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: AGreenhill</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-99769</link>
		<dc:creator>AGreenhill</dc:creator>
		<pubDate>Mon, 18 Feb 2013 19:31:07 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-99769</guid>
		<description>He was pretty poor in school as well... and when you read about all the papers that he read while working at the patent office - it really opens your eyes to how little of a leap he made. Contemporary physicists got him right up to the edge...</description>
		<content:encoded><![CDATA[<p>He was pretty poor in school as well&#8230; and when you read about all the papers that he read while working at the patent office &#8211; it really opens your eyes to how little of a leap he made. Contemporary physicists got him right up to the edge&#8230;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: AGreenhill</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-99762</link>
		<dc:creator>AGreenhill</dc:creator>
		<pubDate>Mon, 18 Feb 2013 19:26:15 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-99762</guid>
		<description>Exactly. I&#039;m curious as to why this even made it onto the website. Surely something intelligent has made it to the editor&#039;s desk.</description>
		<content:encoded><![CDATA[<p>Exactly. I&#8217;m curious as to why this even made it onto the website. Surely something intelligent has made it to the editor&#8217;s desk.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: AGreenhill</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-99759</link>
		<dc:creator>AGreenhill</dc:creator>
		<pubDate>Mon, 18 Feb 2013 19:20:38 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-99759</guid>
		<description>Bob, if you&#039;re not sure that a general AI could come up with the theory of relativity, then either: 1) You don&#039;t believe the brain functions as Ray Kurzweil has described in his book - OR - 2) You don&#039;t believe that what the brain does is sufficient to explain human thought ... I think most likely you did not read the book in the first place. Do give it a go, it&#039;s very interesting.</description>
		<content:encoded><![CDATA[<p>Bob, if you&#8217;re not sure that a general AI could come up with the theory of relativity, then either: 1) You don&#8217;t believe the brain functions as Ray Kurzweil has described in his book &#8211; OR &#8211; 2) You don&#8217;t believe that what the brain does is sufficient to explain human thought &#8230; I think most likely you did not read the book in the first place. Do give it a go, it&#8217;s very interesting.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Gabriel</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-99559</link>
		<dc:creator>Gabriel</dc:creator>
		<pubDate>Mon, 18 Feb 2013 02:30:51 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-99559</guid>
		<description>To be perfectly honest, WLGJR, I don&#039;t see the point of actually attempting to time-travel... virtual reality will enable us to &quot;visit&quot; virtually any era, as well as construct our own worlds, no matter how imaginative... all of this will be made possible without having to worry about causality, paradoxes, or any other issue you could have with regard to &#039;true&#039; time travel.

What&#039;s the point? In VR, my imagination is my only limit... I could create a flawless construct of a previous era, or twist it if I wish... or create a playground that breaks the laws of physics... I could do anything I want and not have to worry about breaking the space-time continuum or any such thing.

It&#039;s important, when asking such questions, to remember what the sort of intelligence that will exist in the future will be capable of when wondering what would or wouldn&#039;t be possible... however, with time travel, I feel there would have to be a reason beyond doing it for the heck of it -- something that would justify such a profound undertaking without risking creating problems... personally, though, I feel VR would be more than sufficient to satisfy most people. Once again, when I can safely travel to any environment or scenario I can think of, no matter how real or imaginary, what&#039;s the value of attempting the real thing anymore?</description>
		<content:encoded><![CDATA[<p>To be perfectly honest, WLGJR, I don&#8217;t see the point of actually attempting to time-travel&#8230; virtual reality will enable us to &#8220;visit&#8221; virtually any era, as well as construct our own worlds, no matter how imaginative&#8230; all of this will be made possible without having to worry about causality, paradoxes, or any other issue you could have with regard to &#8216;true&#8217; time travel.</p>
<p>What&#8217;s the point? In VR, my imagination is my only limit&#8230; I could create a flawless construct of a previous era, or twist it if I wish&#8230; or create a playground that breaks the laws of physics&#8230; I could do anything I want and not have to worry about breaking the space-time continuum or any such thing.</p>
<p>It&#8217;s important, when asking such questions, to remember what the sort of intelligence that will exist in the future will be capable of when wondering what would or wouldn&#8217;t be possible&#8230; however, with time travel, I feel there would have to be a reason beyond doing it for the heck of it &#8212; something that would justify such a profound undertaking without risking creating problems&#8230; personally, though, I feel VR would be more than sufficient to satisfy most people. Once again, when I can safely travel to any environment or scenario I can think of, no matter how real or imaginary, what&#8217;s the value of attempting the real thing anymore?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Eugene Zavidovsky</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-99544</link>
		<dc:creator>Eugene Zavidovsky</dc:creator>
		<pubDate>Mon, 18 Feb 2013 00:11:35 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-99544</guid>
		<description>Bob Caine asked: &quot;But how would it [AI] decide on its own that studies such as these should even be undertaken and then design, execute, and assess the related research to arrive at a verifiable theory?&quot;

The answer has already been provided by Teddybear in the previous comment and by other people here. AI should be developed to solve human problems. That is how it will decide which studies should be undertaken.

And the list of those problems should be regularly set through SMS Direct Democracy voting... Ha-ha, yeah, it is even more off-topic than time travelling, but I think it is very important. Please read this proposal for authority system reform:
&gt;&gt;&gt; https://plus.google.com/105069201369945916209/posts/LFDdmQsKJoR</description>
		<content:encoded><![CDATA[<p>Bob Caine asked: &#8220;But how would it [AI] decide on its own that studies such as these should even be undertaken and then design, execute, and assess the related research to arrive at a verifiable theory?&#8221;</p>
<p>The answer has already been provided by Teddybear in the previous comment and by other people here. AI should be developed to solve human problems. That is how it will decide which studies should be undertaken.</p>
<p>And the list of those problems should be regularly set through SMS Direct Democracy voting&#8230; Ha-ha, yeah, it is even more off-topic than time travelling, but I think it is very important. Please read this proposal for authority system reform:<br />
&gt;&gt;&gt; <a href="https://plus.google.com/105069201369945916209/posts/LFDdmQsKJoR" rel="nofollow">https://plus.google.com/105069201369945916209/posts/LFDdmQsKJoR</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: WLGJR</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-99379</link>
		<dc:creator>WLGJR</dc:creator>
		<pubDate>Sun, 17 Feb 2013 08:00:23 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-99379</guid>
		<description>... Another point is time travelling.
Kind of off-topic, but I wish to talk about &quot;real&quot; time-travelling into the past (as opposed to virtual time-travelling into reconstructed historical worlds/past events). Time travelling into the future is relatively trivial, requiring only &quot;slowing down the subjective time-flow&quot;, e.g. via cryonic suspension or relativistic spacecraft.
If an intelligent being successfully builds a time machine, travels into the past, and alters the past (which, with the fabled *butterfly effect* in mind, is actually much more easily done than pop SF writers imagine), enormous change will happen at the time-traveller&#039;s home point in time (the farther he/she travels into the past, the greater the *change* would be).
If the change is malevolent/negative, the time traveller should be the one that&#039;s blamed. 
But, as well, according to some philosophers, humans do *not* really possess *free will*. In that case the time-traveller should not receive the blame; instead the malevolence is *inevitable*, as *everything and all happenings in the universe*, including the creation of the time machine, is pre-destined and unchangeable.
Or, does *free will* actually exist? (This starts to sound mysterious and even spiritual)
As well, if time travelling becomes possible, we can exploit it to do computations, as Hans Moravec outlined in his book &quot;Robot: Mere Machine to Transcendent Mind&quot;. A time-travelling computer could receive a question (a complex and time-consuming problem), compute a solution (which takes a long time), and send the solution *backward in time* to a point immediately after the human user asked the question. (This is only a very elementary example; I guess if we actually achieve time-travel we will invent even more elaborate computing techniques)</description>
		<content:encoded><![CDATA[<p>&#8230; Another point is time travelling.<br />
Kind of off-topic, but I wish to talk about &#8220;real&#8221; time-travelling into the past (as opposed to virtual time-travelling into reconstructed historical worlds/past events). Time travelling into the future is relatively trivial, requiring only &#8220;slowing down the subjective time-flow&#8221;, e.g. via cryonic suspension or relativistic spacecraft.<br />
If an intelligent being successfully builds a time machine, travels into the past, and alters the past (which, with the fabled *butterfly effect* in mind, is actually much more easily done than pop SF writers imagine), enormous change will happen at the time-traveller&#8217;s home point in time (the farther he/she travels into the past, the greater the *change* would be).<br />
If the change is malevolent/negative, the time traveller should be the one that&#8217;s blamed.<br />
But, as well, according to some philosophers, humans do *not* really possess *free will*. In that case the time-traveller should not receive the blame; instead the malevolence is *inevitable*, as *everything and all happenings in the universe*, including the creation of the time machine, is pre-destined and unchangeable.<br />
Or, does *free will* actually exist? (This starts to sound mysterious and even spiritual)<br />
As well, if time travelling becomes possible, we can exploit it to do computations, as Hans Moravec outlined in his book &#8220;Robot: Mere Machine to Transcendent Mind&#8221;. A time-travelling computer could receive a question (a complex and time-consuming problem), compute a solution (which takes a long time), and send the solution *backward in time* to a point immediately after the human user asked the question. (This is only a very elementary example; I guess if we actually achieve time-travel we will invent even more elaborate computing techniques)</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: WLGJR</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-99376</link>
		<dc:creator>WLGJR</dc:creator>
		<pubDate>Sun, 17 Feb 2013 07:43:11 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-99376</guid>
		<description>What started out as greedy corporations will (probably) eventually give rise to the Net-based Superhuman Artificial Intelligences and, in the far future, rule the post-Singularity world.
Yes, the world is unfair.</description>
		<content:encoded><![CDATA[<p>What started out as greedy corporations will (probably) eventually give rise to Net-based Superhuman Artificial Intelligences that will, in the far future, rule the post-Singularity world.<br />
Yes, the world is unfair.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Teddybear</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-99331</link>
		<dc:creator>Teddybear</dc:creator>
		<pubDate>Sun, 17 Feb 2013 02:17:52 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-99331</guid>
		<description>About &quot;sense of purpose&quot;:

Machine/AI is not an isolated island; it always accompanies human endeavor. Presently, machines and the internet supplement human research and everyday cognitive development. They do not have an &quot;isolated sense of purpose&quot;. Yet, with human involvement, they do have a &quot;symbiotic sense of purpose&quot;.

Go back to Einstein&#039;s era. Then, the machine was paper and pencil. Then, the internet was day-to-day interaction among researchers and at conferences. Then, the machine could not have a separate sense of purpose. Yet, in Einstein&#039;s hand, his pen wrote beautiful formulas.

Human beings are symbiotic with the tools they develop.

Another point is how much sense of purpose can be judged as a &quot;sense of purpose&quot;. It&#039;s largely a matter of technological complexity. Sufficiently complex technological societies show a higher degree of &quot;sense of purpose&quot;, like today&#039;s developed world compared with most societies two thousand years ago.
High technology, in itself, has more sense of purpose than low technology, like the computer/internet compared with pen and paper.

Another point is time travelling. It&#039;s a what-if question, an alternative-history question. The soundness of these kinds of questions is often questioned.</description>
		<content:encoded><![CDATA[<p>About &#8220;sense of purpose&#8221;:</p>
<p>Machine/AI is not an isolated island; it always accompanies human endeavor. Presently, machines and the internet supplement human research and everyday cognitive development. They do not have an &#8220;isolated sense of purpose&#8221;. Yet, with human involvement, they do have a &#8220;symbiotic sense of purpose&#8221;.</p>
<p>Go back to Einstein&#8217;s era. Then, the machine was paper and pencil. Then, the internet was day-to-day interaction among researchers and at conferences. Then, the machine could not have a separate sense of purpose. Yet, in Einstein&#8217;s hand, his pen wrote beautiful formulas.</p>
<p>Human beings are symbiotic with the tools they develop.</p>
<p>Another point is how much sense of purpose can be judged as a &#8220;sense of purpose&#8221;. It&#8217;s largely a matter of technological complexity. Sufficiently complex technological societies show a higher degree of &#8220;sense of purpose&#8221;, like today&#8217;s developed world compared with most societies two thousand years ago.<br />
High technology, in itself, has more sense of purpose than low technology, like the computer/internet compared with pen and paper.</p>
<p>Another point is time travelling. It&#8217;s a what-if question, an alternative-history question. The soundness of these kinds of questions is often questioned.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Teddybear</title>
		<link>http://www.kurzweilai.net/ask-ray-how-to-create-a-mind-thought-experiment/comment-page-1#comment-99326</link>
		<dc:creator>Teddybear</dc:creator>
		<pubDate>Sun, 17 Feb 2013 01:57:03 +0000</pubDate>
		<guid isPermaLink="false">http://www.kurzweilai.net/?p=179970#comment-99326</guid>
		<description>The back link and linking system of internet is, from the beginning, curiosity friendly.

Google is the dominant magic powder for curiosity.</description>
		<content:encoded><![CDATA[<p>The backlink and linking system of the internet has been, from the beginning, curiosity-friendly.</p>
<p>Google is the dominant magic powder for curiosity.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
