<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_01_30_1555237</id>
	<title>Evolving Robots Learn To Prey On Each Other</title>
	<author>Soulskill</author>
	<datestamp>1264871700000</datestamp>
	<htmltext>quaith writes <i>"Dario Floreano and Laurent Keller report in <em>PLoS ONE</em> how <a href="http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1000292">their robots were able to rapidly evolve complex behaviors</a> such as collision-free movement, homing, predator versus prey strategies, cooperation, and even altruism. A hundred generations of selection controlled by a simple neural network were sufficient to allow robots to evolve these behaviors. Their robots initially exhibited completely uncoordinated behavior, but as they evolved, the robots were able to <a href="http://www.popsci.com/science/article/2010-01/robots-display-predator-prey-co-evolution-evolve-better-homing-techniques">orientate, escape predators, and even cooperate</a>. The authors point out that this confirms a proposal by Alan Turing who suggested in the 1950s that building machines capable of adaptation and learning would be too difficult for a human designer and could instead be done using an evolutionary process. The robots aren't yet ready to compete in Robot Wars, but they're still pretty impressive."</i></htmltext>
<tokentext>quaith writes " Dario Floreano and Laurent Keller report in PLoS ONE how their robots were able to rapidly evolve complex behaviors such as collision-free movement , homing , predator versus prey strategies , cooperation , and even altruism .
A hundred generations of selection controlled by a simple neural network were sufficient to allow robots to evolve these behaviors .
Their robots initially exhibited completely uncoordinated behavior , but as they evolved , the robots were able to orientate , escape predators , and even cooperate .
The authors point out that this confirms a proposal by Alan Turing who suggested in the 1950s that building machines capable of adaptation and learning would be too difficult for a human designer and could instead be done using an evolutionary process .
The robots are n't yet ready to compete in Robot Wars , but they 're still pretty impressive .
"</tokentext>
<sentencetext>quaith writes "Dario Floreano and Laurent Keller report in PLoS ONE how their robots were able to rapidly evolve complex behaviors such as collision-free movement, homing, predator versus prey strategies, cooperation, and even altruism.
A hundred generations of selection controlled by a simple neural network were sufficient to allow robots to evolve these behaviors.
Their robots initially exhibited completely uncoordinated behavior, but as they evolved, the robots were able to orientate, escape predators, and even cooperate.
The authors point out that this confirms a proposal by Alan Turing who suggested in the 1950s that building machines capable of adaptation and learning would be too difficult for a human designer and could instead be done using an evolutionary process.
The robots aren't yet ready to compete in Robot Wars, but they're still pretty impressive.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30966680</id>
	<title>Re:The word is "orient", not "orientate"</title>
	<author>Anonymous</author>
	<datestamp>1264856520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Nevertheless, I was able to interpretate the summary easily enough.</p></htmltext>
<tokentext>Nevertheless , I was able to interpretate the summary easily enough .</tokentext>
<sentencetext>Nevertheless, I was able to interpretate the summary easily enough.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963966</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30999386</id>
	<title>Re:paper was in PLoS Biology not PLoS One</title>
	<author>Anonymous</author>
	<datestamp>1265140560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>From what I remember having learned between Algorithms, AI, and Game Design classes a little while back, they pretty much have each robot randomly pick some "traits" and act with those as their modus operandi.  At the end of a "generation", the system evaluates which traits did "better" and gives those traits a better chance of being selected for the next generation.  After many generations of this, you generally end up with the best of the best.</p><p>Of course, the humans write the evaluations for what is considered to be successful and how you pick the best, but the point is to have the software adapt itself to "learn" the best ways to meet those evaluations.  From what I remember, this is usually done with just a few variables to hone down the best values to use for those variables, but it sounds like this experiment took it a few steps further.</p></htmltext>
<tokentext>From what I remember having learned between Algorithms , AI , and Game Design classes a little while back , they pretty much have each robot randomly pick some " traits " and act with those as their modus operandi .
At the end of a " generation " , the system evaluates which traits did " better " and gives those traits a better chance of being selected for the next generation .
After many generations of this , you generally end up with the best of the best .
Of course , the humans write the evaluations for what is considered to be successful and how you pick the best , but the point is to have the software adapt itself to " learn " the best ways to meet those evaluations .
From what I remember , this is usually done with just a few variables to hone down the best values to use for those variables , but it sounds like this experiment took it a few steps further .</tokentext>
<sentencetext>From what I remember having learned between Algorithms, AI, and Game Design classes a little while back, they pretty much have each robot randomly pick some "traits" and act with those as their modus operandi.
At the end of a "generation", the system evaluates which traits did "better" and gives those traits a better chance of being selected for the next generation.
After many generations of this, you generally end up with the best of the best.
Of course, the humans write the evaluations for what is considered to be successful and how you pick the best, but the point is to have the software adapt itself to "learn" the best ways to meet those evaluations.
From what I remember, this is usually done with just a few variables to hone down the best values to use for those variables, but it sounds like this experiment took it a few steps further.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963910</parent>
</comment>
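<!-- The generational trait-selection loop the comment above describes can be sketched as follows. This is a minimal illustration, not the experiment's actual setup: the bit-string genome, the toy fitness function, and all parameter values are stand-ins chosen for the sketch.

```python
import random

def mutate(genome, rng, rate=0.05):
    # Each trait has a small chance of flipping between generations.
    return [1 - g if rng.random() < rate else g for g in genome]

def evolve(fitness, genome_len=8, pop_size=20, generations=100, seed=0):
    rng = random.Random(seed)
    # Each "robot" starts by randomly picking its traits (a bit-string here).
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate which traits did better this generation...
        scores = [fitness(g) for g in pop]
        # ...and give those traits a better chance of being selected.
        weights = [s + 1e-9 for s in scores]  # avoid an all-zero weight vector
        parents = rng.choices(pop, weights=weights, k=pop_size)
        pop = [mutate(p, rng) for p in parents]
    return max(pop, key=fitness)

# Toy fitness: number of 1-bits stands in for "how well the robot homed".
best = evolve(fitness=sum)
```

With this toy fitness the loop drifts toward the all-ones genome, which is the "best of the best" effect the comment describes. -->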
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964002</id>
	<title>Reminds me of the Mall</title>
	<author>Herkum01</author>
	<datestamp>1264878780000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>The predator and prey bots remind me of sales people chasing around after anyone who wanders too closely while they try their sales pitch.</p></htmltext>
<tokentext>The predator and prey bots remind me of sales people chasing around after anyone who wanders too closely while they try their sales pitch .</tokentext>
<sentencetext>The predator and prey bots remind me of sales people chasing around after anyone who wanders too closely while they try their sales pitch.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963390</id>
	<title>A preemptive</title>
	<author>Anonymous</author>
	<datestamp>1264875420000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext><p>What could possibly go wrong?!</p></htmltext>
<tokentext>What could possibly go wrong ? !</tokentext>
<sentencetext>What could possibly go wrong?!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964322</id>
	<title>Not really learning...</title>
	<author>Anonymous</author>
	<datestamp>1264880820000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>... everything about the experiments is set up and designed, no real-world intelligence evolved like this.  This is more like being in control of the weather, causing it to snow, laying the snow just right, then rolling up a big enough ball of snow and letting it down the side of a mountain.</p></htmltext>
<tokentext>... everything about the experiments is set up and designed , no real-world intelligence evolved like this .
This is more like being in control of the weather , causing it to snow , laying the snow just right , then rolling up a big enough ball of snow and letting it down the side of a mountain .</tokentext>
<sentencetext>... everything about the experiments is set up and designed, no real-world intelligence evolved like this.
This is more like being in control of the weather, causing it to snow, laying the snow just right, then rolling up a big enough ball of snow and letting it down the side of a mountain.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30966764</id>
	<title>Re:paper was in PLoS Biology not PLoS One</title>
	<author>Anonymous</author>
	<datestamp>1264857300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Thanks for pointing that out. My mistake. It was in PLoS Biology. I'll be more careful about distinguishing one PLoS from another in the future.</htmltext>
<tokentext>Thanks for pointing that out .
My mistake .
It was in PLoS Biology .
I 'll be more careful about distinguishing one PLoS from another in the future .</tokentext>
<sentencetext>Thanks for pointing that out.
My mistake.
It was in PLoS Biology.
I'll be more careful about distinguishing one PLoS from another in the future.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963398</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963410</id>
	<title>In Time</title>
	<author>Anonymous</author>
	<datestamp>1264875480000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>In time you can expect these robots to develop useful behaviours. Not those of most interest to grad students. Such behaviours will include microwaving food and contributing to slashdot.</htmltext>
<tokentext>In time you can expect these robots to develop useful behaviours .
Not those of most interest to grad students .
Such behaviours will include microwaving food and contributing to slashdot .</tokentext>
<sentencetext>In time you can expect these robots to develop useful behaviours.
Not those of most interest to grad students.
Such behaviours will include microwaving food and contributing to slashdot.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964132</id>
	<title>Error in summary</title>
	<author>mizaru</author>
	<datestamp>1264879380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>According to TFA, the robots were controlled by neural networks, not the selection process.</htmltext>
<tokentext>According to TFA , the robots were controlled by neural networks , not the selection process .</tokentext>
<sentencetext>According to TFA, the robots were controlled by neural networks, not the selection process.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30965018</id>
	<title>Correct but...</title>
	<author>raftpeople</author>
	<datestamp>1264842600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Your point is valid if the genotype to phenotype mapping is a simple mapping to neuron type, connection type, weights, etc.  However, we clearly have effective crossover in humans, which means there can be a genotype to phenotype mapping that operates at more of a functional level.  It's an interesting and difficult problem.</htmltext>
<tokentext>Your point is valid if the genotype to phenotype mapping is a simple mapping to neuron type , connection type , weights , etc .
However , we clearly have effective crossover in humans , which means there can be a genotype to phenotype mapping that operates at more of a functional level .
It 's an interesting and difficult problem .</tokentext>
<sentencetext>Your point is valid if the genotype to phenotype mapping is a simple mapping to neuron type, connection type, weights, etc.
However, we clearly have effective crossover in humans, which means there can be a genotype to phenotype mapping that operates at more of a functional level.
It's an interesting and difficult problem.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963942</parent>
</comment>
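<!-- The crossover operator the comment above is discussing, in its plainest form, looks like the sketch below. This is a generic illustration, not the paper's method; applied directly to raw connection weights it tends to splice together incompatible halves of two networks, which is why the comment argues for a genotype-to-phenotype mapping that operates at a functional level.

```python
import random

def one_point_crossover(a, b, rng):
    # Cut both parent genomes at the same random point and swap the tails,
    # producing two children that each mix material from both parents.
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

rng = random.Random(42)
mom, dad = [0] * 8, [1] * 8
child1, child2 = one_point_crossover(mom, dad, rng)
```

Every trait present in the parents survives in exactly one child; what a functional-level encoding adds is the guarantee that the swapped segments are meaningful sub-structures rather than arbitrary weight slices. -->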
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964982</id>
	<title>Have Them Spend More Time With Humans</title>
	<author>Nom du Keyboard</author>
	<datestamp>1264842360000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>If you want to do this right then have your robots spend more time with humans than with other robots. This way we can evolve a robot who plays well with other people rather than with other robots.
<br> <br>
That's what I'm sure my favorite robot SF authors -- Elf Sternberg and D.B. Story -- have planned for their robots.  I would love to meet either of their creations.</htmltext>
<tokentext>If you want to do this right then have your robots spend more time with humans than with other robots .
This way we can evolve a robot who plays well with other people rather than with other robots .
That 's what I 'm sure my favorite robot SF authors -- Elf Sternberg and D.B. Story -- have planned for their robots .
I would love to meet either of their creations .</tokentext>
<sentencetext>If you want to do this right then have your robots spend more time with humans than with other robots.
This way we can evolve a robot who plays well with other people rather than with other robots.
That's what I'm sure my favorite robot SF authors -- Elf Sternberg and D.B. Story -- have planned for their robots.
I would love to meet either of their creations.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30966530</id>
	<title>How *fast* do these things evolve, and uh...</title>
	<author>Anonymous</author>
	<datestamp>1264855380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Are they connected to the internet?</p><p>Just asking, cause I might need to start escaping now, you know?</p></htmltext>
<tokentext>Are they connected to the internet ?
Just asking , cause I might need to start escaping now , you know ?</tokentext>
<sentencetext>Are they connected to the internet?
Just asking, cause I might need to start escaping now, you know?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964428</id>
	<title>As If Asimov Wrote Childhood's End</title>
	<author>Anonymous</author>
	<datestamp>1264881780000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>A new meaning to next....</p></htmltext>
<tokentext>A new meaning to next... .</tokentext>
<sentencetext>A new meaning to next....</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964208</id>
	<title>Second Variety</title>
	<author>DeadPixels</author>
	<datestamp>1264879920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Anyone else reminded of <a href="http://en.wikipedia.org/wiki/Second_Variety" title="wikipedia.org">that Philip K Dick story "Second Variety"</a> [wikipedia.org]?<br> <br>

Spoiler for the story - since it's basically the ending - but the point in question: <p><div class="quote"><p>As the Tasso models approach, Hendricks notices the bombs clipped to their belts, and recalls that first Tasso used one to destroy other claws. At his end, Hendricks is vaguely comforted by the thought that the claws are designing, developing, and producing weapons meant for killing other claws.</p></div></p>
	</htmltext>
<tokentext>Anyone else reminded of that Philip K Dick story " Second Variety " [ wikipedia.org ] ?
Spoiler for the story - since it 's basically the ending - but the point in question : As the Tasso models approach , Hendricks notices the bombs clipped to their belts , and recalls that first Tasso used one to destroy other claws .
At his end , Hendricks is vaguely comforted by the thought that the claws are designing , developing , and producing weapons meant for killing other claws .</tokentext>
<sentencetext>Anyone else reminded of that Philip K Dick story "Second Variety" [wikipedia.org]?
Spoiler for the story - since it's basically the ending - but the point in question: As the Tasso models approach, Hendricks notices the bombs clipped to their belts, and recalls that first Tasso used one to destroy other claws.
At his end, Hendricks is vaguely comforted by the thought that the claws are designing, developing, and producing weapons meant for killing other claws.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30965438</id>
	<title>Spell It</title>
	<author>b4upoo</author>
	<datestamp>1264845660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Robots might orient themselves but orientating themselves must involve eating potatoes while finding their directions.</p></htmltext>
<tokentext>Robots might orient themselves but orientating themselves must involve eating potatoes while finding their directions .</tokentext>
<sentencetext>Robots might orient themselves but orientating themselves must involve eating potatoes while finding their directions.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30965044</id>
	<title>similar idea for genetic algorithms</title>
	<author>AlgorithMan</author>
	<datestamp>1264842840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I thought you might have male and female algorithms for some optimization problems "walking around" in a virtual world, mate to create combined algorithms, giving you some kind of blood-relationship between the algorithms (and yes, they should die after some time). The related algorithms would form "clans" and if males of opposing clans meet, they fight over each other's resources (RAM and CPU cycles) and there should be sources of these resources, which should dry out over time, so the algorithms HAVE to migrate and attack each other to get new resources... and fighting should drain some resources (but having more resources should give you an advantage in a fight, like being allowed to run longer or use more RAM)<br>
having more resources should make the male algorithms more attractive to the female algorithms and there should be some different kinds of "aggressiveness"...<br> <br>

This is exactly how nature evolved our brains - this should really work well for genetic algorithms...<br> <br>

Ultimately, when one algorithm owns all the resources, he's "the result"...<br> <br>

I only fear that the algorithms might develop some p*ssy features like compassion, culture, science, sharing, etc. This would make the results weak and useless! I'll have to hard-code religious fundamentalists, rabble-rousers, RIAA lawyers and republicans! And I have to give different clans different commandments so they can fight about whose commandments are the commandments by the one, true GOD!!! MUAHAHAHA<br> <br>

And god damn, I'll have to give a name to the resources... I might call them spice... or oil...</htmltext>
<tokentext>I thought you might have male and female algorithms for some optimization problems " walking around " in a virtual world , mate to create combined algorithms , giving you some kind of blood-relationship between the algorithms ( and yes , they should die after some time ) .
The related algorithms would form " clans " and if males of opposing clans meet , they fight over each other 's resources ( RAM and CPU cycles ) and there should be sources of these resources , which should dry out over time , so the algorithms HAVE to migrate and attack each other to get new resources... and fighting should drain some resources ( but having more resources should give you an advantage in a fight , like being allowed to run longer or use more RAM ) having more resources should make the male algorithms more attractive to the female algorithms and there should be some different kinds of " aggressiveness " .. . This is exactly how nature evolved our brains - this should really work well for genetic algorithms.. . Ultimately , when one algorithm owns all the resources , he 's " the result " .. . I only fear that the algorithms might develop some p * ssy features like compassion , culture , science , sharing , etc .
This would make the results weak and useless !
I 'll have to hard-code religious fundamentalists , rabble-rousers , RIAA lawyers and republicans !
And I have to give different clans different commandments so they can fight about whose commandments are the commandments by the one , true GOD ! ! !
MUAHAHAHA And god damn , I 'll have to give a name to the resources... I might call them spice... or oil.. .</tokentext>
<sentencetext>I thought you might have male and female algorithms for some optimization problems "walking around" in a virtual world, mate to create combined algorithms, giving you some kind of blood-relationship between the algorithms (and yes, they should die after some time).
The related algorithms would form "clans" and if males of opposing clans meet, they fight over each other's resources (RAM and CPU cycles) and there should be sources of these resources, which should dry out over time, so the algorithms HAVE to migrate and attack each other to get new resources... and fighting should drain some resources (but having more resources should give you an advantage in a fight, like being allowed to run longer or use more RAM)
having more resources should make the male algorithms more attractive to the female algorithms and there should be some different kinds of "aggressiveness"...

This is exactly how nature evolved our brains - this should really work well for genetic algorithms...

Ultimately, when one algorithm owns all the resources, he's "the result"...

I only fear that the algorithms might develop some p*ssy features like compassion, culture, science, sharing, etc.
This would make the results weak and useless!
I'll have to hard-code religious fundamentalists, rabble-rousers, RIAA lawyers and republicans!
And I have to give different clans different commandments so they can fight about whose commandments are the commandments by the one, true GOD!!!
MUAHAHAHA

And god damn, I'll have to give a name to the resources... I might call them spice... or oil...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963730</id>
	<title>And I predict</title>
	<author>Colin Smith</author>
	<datestamp>1264877220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Skynet will be evolved in JavaScript.</p></htmltext>
<tokentext>Skynet will be evolved in JavaScript .</tokentext>
<sentencetext>Skynet will be evolved in JavaScript.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963390</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30966136</id>
	<title>Re:The word is "orient", not "orientate"</title>
	<author>Facegarden</author>
	<datestamp>1264851300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>The noun "<a href="http://dictionary.reference.com/browse/orientation" title="reference.com">orientation</a> [reference.com]" is derived from the verb "<a href="http://dictionary.reference.com/browse/orient" title="reference.com">orient</a> [reference.com]", not the other way around.</p></div><p>Thanks god someone mentioned that! I absolutely hate when people use that "word". It's just... wrong. It's like when people say "funner". Who cares if you know what the person meant, they're still butchering English and if they're a native speaker, that's just ridiculous.<br>-Taylor</p></p>
	</htmltext>
<tokentext>The noun " orientation [ reference.com ] " is derived from the verb " orient [ reference.com ] " , not the other way around .
Thanks god someone mentioned that !
I absolutely hate when people use that " word " .
It 's just... wrong .
It 's like when people say " funner " .
Who cares if you know what the person meant , they 're still butchering English and if they 're a native speaker , that 's just ridiculous .
-Taylor</tokentext>
<sentencetext>The noun "orientation [reference.com]" is derived from the verb "orient [reference.com]", not the other way around.
Thanks god someone mentioned that!
I absolutely hate when people use that "word".
It's just... wrong.
It's like when people say "funner".
Who cares if you know what the person meant, they're still butchering English and if they're a native speaker, that's just ridiculous.
-Taylor</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963966</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964038</id>
	<title>So what's new?</title>
	<author>DerekLyons</author>
	<datestamp>1264878900000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>This kind of behavior was first demonstrated/modeled (AFAIK/IIRC) as part of the <a href="http://life.ou.edu/tierra/" title="ou.edu">Tierra</a> [ou.edu] simulations almost twenty years ago.  Though I don't have a reference to hand, I know it's been done in neural networks before too.<br>
&nbsp; <br>So other than the 'sizzle' (as opposed to 'steak') of doing it with robots, can anyone explain what is new here?</p></htmltext>
<tokentext>This kind of behavior was first demonstrated/modeled ( AFAIK/IIRC ) as part of the Tierra [ ou.edu ] simulations almost twenty years ago .
Though I do n't have a reference to hand , I know it 's been done in neural networks before too .
  So other than the 'sizzle ' ( as opposed to 'steak ' ) of doing it with robots , can anyone explain what is new here ?</tokentext>
<sentencetext>This kind of behavior was first demonstrated/modeled (AFAIK/IIRC) as part of the Tierra [ou.edu] simulations almost twenty years ago.
Though I don't have a reference to hand, I know it's been done in neural networks before too.
  So other than the 'sizzle' (as opposed to 'steak') of doing it with robots, can anyone explain what is new here?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964442</id>
	<title>Controlled by neural net?</title>
	<author>zippthorne</author>
	<datestamp>1264881780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Surely the robots were themselves controlled by neural nets which were selected by Genetic Algorithm, rather than using a neural net to control the selection process itself.  Perhaps if I RTFA...</p></htmltext>
<tokentext>Surely the robots were themselves controlled by neural nets which were selected by Genetic Algorithm , rather than using a neural net to control the selection process itself .
Perhaps if I RTFA.. .</tokentext>
<sentencetext>Surely the robots were themselves controlled by neural nets which were selected by Genetic Algorithm, rather than using a neural net to control the selection process itself.
Perhaps if I RTFA...</sentencetext>
</comment>
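<!-- The distinction the comment above draws (neural nets control the robots; a genetic algorithm selects among the nets) can be sketched as follows. This is a minimal stand-in, not the paper's architecture: the one-layer tanh controller, the fixed sensor reading, and the truncation-plus-mutation selection scheme are all assumptions made for the illustration.

```python
import math
import random

def controller_output(weights, sensors):
    # The genome IS the neural controller's weight vector; the robot's
    # behavior is just this network's output given its sensor readings.
    return math.tanh(sum(w * s for w, s in zip(weights, sensors)))

def fitness(weights):
    # Toy task: drive the output toward +1 for a fixed sensor reading,
    # standing in for "score how well this controller's robot behaved".
    return 1.0 - abs(1.0 - controller_output(weights, (1.0, 0.5)))

rng = random.Random(1)
pop = [[rng.uniform(-1.0, 1.0) for _ in range(2)] for _ in range(30)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]  # selection acts on weight vectors, not on hand-written behavior code
    pop = [[w + rng.gauss(0.0, 0.1) for w in p]
           for p in survivors for _ in range(3)]
best = max(pop, key=fitness)
```

The GA never touches the control loop itself; it only breeds better weight vectors, which is the reading of the summary the comment is defending. -->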
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963866</id>
	<title>Re:paper was in PLoS Biology not PLoS One</title>
	<author>Xinvoker</author>
	<datestamp>1264877940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Correct, and here is the original article, for hardcore RTFA'ers  <a href="http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1000292" title="plosbiology.org" rel="nofollow">http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1000292</a> [plosbiology.org]</htmltext>
<tokentext>Correct , and here is the original article , for hardcore RTFA'ers http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1000292 [ plosbiology.org ]</tokentext>
<sentencetext>Correct, and here is the original article, for hardcore RTFA'ers http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1000292 [plosbiology.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963398</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963446</id>
	<title>And then they'll develop religion...</title>
	<author>the_humeister</author>
	<datestamp>1264875720000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>And those who aren't saved go to robot hell, and must play the fiddle and beat the robot devil in order to leave.</p></htmltext>
<tokentext>And those who are n't saved go to robot hell , and must play the fiddle and beat the robot devil in order to leave .</tokentext>
<sentencetext>And those who aren't saved go to robot hell, and must play the fiddle and beat the robot devil in order to leave.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30965772</id>
	<title>Re:And then they'll develop religion...</title>
	<author>ducomputergeek</author>
	<datestamp>1264848480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I thought they had to accept the One True God.....based on the mind of a spoiled 15 year old girl...</p></htmltext>
<tokentext>I thought they had to accept the One True God.....based on the mind of a spoiled 15 year old girl.. .</tokentext>
<sentencetext>I thought they had to accept the One True God.....based on the mind of a spoiled 15 year old girl...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963446</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30965334</id>
	<title>Oblig.</title>
	<author>Anonymous</author>
	<datestamp>1264844940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Evolving robots learn to prey on each other.</p><p>They're calling them [insert despised political party here].</p></htmltext>
<tokentext>Evolving robots learn to prey on each other .
They 're calling them [ insert despised political party here ] .</tokentext>
<sentencetext>Evolving robots learn to prey on each other.
They're calling them [insert despised political party here].</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963836</id>
	<title>Re:But the real question is...</title>
	<author>ae1294</author>
	<datestamp>1264877760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>do they believe in God?</p></div><p>Yes but he's a robot that lives in the sky and made them in his own image...</p>
	</htmltext>
<tokenext>do they believe in God ? Yes but he 's a robot that lives in the sky and made them in his own image.. .</tokentext>
<sentencetext>do they believe in God?
Yes but he's a robot that lives in the sky and made them in his own image...
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963662</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30966014</id>
	<title>Re:How to Survive a Robot Uprising</title>
	<author>Anonymous</author>
	<datestamp>1264850580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Using a camera's flashlight of course!</p></htmltext>
<tokenext>Using a camera 's flashlight of course !</tokentext>
<sentencetext>Using a camera's flashlight of course!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963958</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964980</id>
	<title>Re:A preemptive</title>
	<author>Hurricane78</author>
	<datestamp>1264842360000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>Flash forward a couple of billions of years, and we will perhaps write them a letter, that says it as good as this one:<br><a href="http://www.youtube.com/watch?v=7KnGNOiFll4" title="youtube.com">http://www.youtube.com/watch?v=7KnGNOiFll4</a> [youtube.com] (Protip: It&rsquo;s not meant in a religious way. That&rsquo;s not the point.<nobr> <wbr></nobr>:)<br>(Btw, if you like it, and like really great poetry, try this: <a href="http://www.youtube.com/watch?v=i5e5FUvRzNQ" title="youtube.com">http://www.youtube.com/watch?v=i5e5FUvRzNQ</a> [youtube.com] )</p></htmltext>
<tokenext>Flash forward a couple of billions of years , and we will perhaps write them a letter , that says it as good as this one : http : //www.youtube.com/watch ? v = 7KnGNOiFll4 [ youtube.com ] ( Protip : It 's not meant in a religious way .
That 's not the point .
: ) ( Btw , if you like it , and like really great poetry , try this : http : //www.youtube.com/watch ? v = i5e5FUvRzNQ [ youtube.com ] )</tokentext>
<sentencetext>Flash forward a couple of billions of years, and we will perhaps write them a letter, that says it as good as this one:http://www.youtube.com/watch?v=7KnGNOiFll4 [youtube.com] (Protip: It’s not meant in a religious way.
That’s not the point.
:)(Btw, if you like it, and like really great poetry, try this: http://www.youtube.com/watch?v=i5e5FUvRzNQ [youtube.com] )</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963390</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964242</id>
	<title>Robo-shark!</title>
	<author>Xinvoker</author>
	<datestamp>1264880160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It's worth noting that in the experiment where they evolved the bodies as well, the robot that evolved to run faster looks like a shark, with a tail and 2 fins. Fascinating.


If they could do the same experiment with the ability to walk instead of only crawling (see video <a href="http://www.plosbiology.org/article/fetchSingleRepresentation.action?uri=info:doi/10.1371/journal.pbio.1000292.s006" title="plosbiology.org" rel="nofollow">http://www.plosbiology.org/article/fetchSingleRepresentation.action?uri=info:doi/10.1371/journal.pbio.1000292.s006</a> [plosbiology.org] to see what i'm talking about ) and make them do something that requires hands such as lifting something up, we could see if the optimal forms were humanoid, centaur-like, spider-like etc.</htmltext>
<tokenext>It 's worth noting that the robot in the experiment where they evolved the bodies as well , in order to run faster , looks like a shark , with a tail and 2 fins .
Fascinating . If they could do the same experiment with the ability to walk instead of only crawling ( see video http : //www.plosbiology.org/article/fetchSingleRepresentation.action ? uri = info : doi/10.1371/journal.pbio.1000292.s006 [ plosbiology.org ] to see what i 'm talking about ) and make them do something that requires hands such as lifting something up , we could see if the optimal forms were humanoid , centaur-like , spider-like etc .</tokentext>
<sentencetext>It's worth noting that the robot in the experiment where they evolved the bodies as well, in order to run faster, looks like a shark, with a tail and 2 fins.
Fascinating.


If they could do the same experiment with the ability to walk instead of only crawling (see video http://www.plosbiology.org/article/fetchSingleRepresentation.action?uri=info:doi/10.1371/journal.pbio.1000292.s006 [plosbiology.org] to see what i'm talking about ) and make them do something that requires hands such as lifting something up, we could see if the optimal forms were humanoid, centaur-like, spider-like etc.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963384</id>
	<title>first piss</title>
	<author>Anonymous</author>
	<datestamp>1264875420000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext>i bot</htmltext>
<tokenext>i bot</tokentext>
<sentencetext>i bot</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964516</id>
	<title>No confirmation</title>
	<author>Lije Baley</author>
	<datestamp>1264882380000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>This doesn't "confirm" anything about Turing's offhanded opinion.</p></htmltext>
<tokenext>This does n't " confirm " anything about Turing 's offhanded opinion .</tokentext>
<sentencetext>This doesn't "confirm" anything about Turing's offhanded opinion.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963398</id>
	<title>paper was in PLoS Biology not PLoS One</title>
	<author>Anonymous</author>
	<datestamp>1264875420000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext>Minor detail perhaps, but as Academic Editor in Chief of PLoS Biology I want to point out that the paper was in PLoS Biology not PLoS One<nobr> <wbr></nobr>...</htmltext>
<tokenext>Minor detail perhaps , but as Academic Editor in Chief of PLoS Biology I want to point out that the paper was in PLoS Biology not PLoS One .. .</tokentext>
<sentencetext>Minor detail perhaps, but as Academic Editor in Chief of PLoS Biology I want to point out that the paper was in PLoS Biology not PLoS One ...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963752</id>
	<title>They must be Muslim.</title>
	<author>Anonymous</author>
	<datestamp>1264877340000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>" the robots were able to orientate"<nobr> <wbr></nobr>... Neat. Wow. Did they have an internal compass?  Orientate means to "face east", specifically, toward Mecca.</p></htmltext>
<tokenext>" the robots were able to orientate " ... Neat. Wow .
Did they have an internal compass ?
Orientate means to " face east " , specifically , toward Mecca .</tokentext>
<sentencetext>" the robots were able to orientate" ... Neat. Wow.
Did they have an internal compass?
Orientate means to "face east", specifically, toward Mecca.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964936</id>
	<title>skynet, wopr, or maybe the cylons</title>
	<author>Joe The Dragon</author>
	<datestamp>1264842060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>skynet, wopr, or maybe the cylons</p></htmltext>
<tokenext>skynet , wopr , or maybe the cylons</tokentext>
<sentencetext>skynet, wopr, or maybe the cylons</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963390</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964022</id>
	<title>Re:paper was in PLoS Biology not PLoS One</title>
	<author>lalena</author>
	<datestamp>1264878840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If you like the article, try this one: <a href="http://www.lalena.com/AI/Ant/" title="lalena.com">Teamwork in Genetic Programming</a> [lalena.com].
<br>I did this 15 years ago, but unfortunately I didn't have access to real robots. Just computer simulation.
<br>Simulated ants used teamwork to lift heavy pieces of food - if they all stopped and waited at the first food they found they would wait forever because there weren't enough of them. Had to have some intelligence.
<br>There were also some water crossing problems where some ants (but not all) had to sacrifice themselves to build a bridge to reach the food.
<br>Some solutions that were created by the GP were actually better than things that I had thought of on my own. Ex: I expected the ants to use pheromones to either attract or repel other ants. In one example the ants used pheromones to determine if an ant should go into the water. Every cycle every ant would release a pheromone. An ant would only enter water to build a bridge if they didn't detect any pheromones. Only the ants on the edge of the growing pheromone cloud would enter the water. After the fifth turn, no more ants would enter the water because the entire map was filled with pheromones. The ants created a way of using pheromones to measure time to limit the number of ants that died. Very unexpected but it worked faster than any other solution.</htmltext>
<tokenext>If you like the article , try this one : Teamwork in Genetic Programming [ lalena.com ] .
I did this 15 years ago , but unfortunately I did n't have access to real robots .
Just computer simulation .
Simulated ants used teamwork to lift heavy pieces of food - if they all stopped and waited at the first food they found they would wait forever because there were n't enough of them .
Had to have some intelligence .
There were also some water crossing problems where some ants ( but not all ) had to sacrifice themselves to build a bridge to reach the food .
Some solutions that were created by the GP were actually better than things that I had thought of on my own .
Ex : I expected the ants to use pheromones to either attract or repel other ants .
In one example the ants used pheromones to determine if an ant should go into the water .
Every cycle every ant would release a pheromone .
An ant would only enter water to build a bridge if they did n't detect any pheromones .
Only the ants on the edge of the growing pheromone cloud would enter the water .
After the fifth turn , no more ants would enter the water because the entire map was filled with pheromones .
The ants created a way of using pheromones to measure time to limit the number of ants that died .
Very unexpected but it worked faster than any other solution .</tokentext>
<sentencetext>If you like the article, try this one: Teamwork in Genetic Programming [lalena.com].
I did this 15 years ago, but unfortunately I didn't have access to real robots.
Just computer simulation.
Simulated ants used teamwork to lift heavy pieces of food - if they all stopped and waited at the first food they found they would wait forever because there weren't enough of them.
Had to have some intelligence.
There were also some water crossing problems where some ants (but not all) had to sacrifice themselves to build a bridge to reach the food.
Some solutions that were created by the GP were actually better than things that I had thought of on my own.
Ex: I expected the ants to use pheromones to either attract or repel other ants.
In one example the ants used pheromones to determine if an ant should go into the water.
Every cycle every ant would release a pheromone.
An ant would only enter water to build a bridge if they didn't detect any pheromones.
Only the ants on the edge of the growing pheromone cloud would enter the water.
After the fifth turn, no more ants would enter the water because the entire map was filled with pheromones.
The ants created a way of using pheromones to measure time to limit the number of ants that died.
Very unexpected but it worked faster than any other solution.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963602</parent>
</comment>
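The pheromone-as-timer trick lalena describes above can be sketched as a toy simulation. Everything concrete below -- the grid, the water occupying one column, the random walk, the one-cell-per-cycle diffusion, and all names and parameters -- is an illustrative assumption, not the original GP setup; only the rule itself (every ant deposits a pheromone each cycle, and an ant enters the water only when it senses none) comes from the comment:

```python
import random

def bridge_sacrifices(n_ants=200, grid=20, cycles=8, seed=0):
    """Sketch of pheromones used as a clock: sacrifices stop once the
    growing pheromone cloud has saturated the map."""
    rng = random.Random(seed)
    # land is x in 1..grid-1; the water to be bridged occupies column x == 0
    ants = [(rng.randrange(1, grid), rng.randrange(grid)) for _ in range(n_ants)]
    cloud = set()                    # cells the pheromone has reached so far
    per_cycle = []
    for _ in range(cycles):
        # each ant wanders one step (clamped to land in x, wrapping in y)
        ants = [(min(max(x + rng.choice((-1, 0, 1)), 1), grid - 1),
                 (y + rng.choice((-1, 0, 1))) % grid)
                for (x, y) in ants]
        entered, survivors = 0, []
        for (x, y) in ants:
            if x == 1 and (x, y) not in cloud:
                entered += 1         # sacrifices itself to extend the bridge
            else:
                survivors.append((x, y))
        ants = survivors
        cloud |= set(ants)           # every surviving ant deposits pheromone
        cloud = cloud | {(cx + dx, cy + dy) for (cx, cy) in cloud
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)}   # one-cell diffusion per cycle
        per_cycle.append(entered)
    return per_cycle
```

Early cycles see a few sacrifices at the edge of the cloud; once the pheromone saturates the map, the count drops toward zero, which is the time-limiting behavior the comment describes.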
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963850</id>
	<title>Robot Singularity</title>
	<author>Oceanplexian</author>
	<datestamp>1264877820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I've always thought that "Real AI" wasn't something we could design, but would need to evolve to the point of intelligence. We already know it works, it's just a matter of application.<br><br>What if this was allowed to span not 50, but 50,000 or 50,000,000 generations?<br>Now imagine all the time it took us to evolve in that capacity and do it in the span of a few minutes.<br><br>I think the ability to have AI is already solved by today's hardware; we just need the right kind of software.</htmltext>
<tokenext>I 've always thought that " Real AI " was n't something we could design , but would need to evolve to the point of intelligence .
We already know it works , it 's just a matter of application .
What if this was allowed to span not 50 , but 50,000 or 50,000,000 generations ?
Now imagine all the time it took us to evolve in that capacity and do it in the span of a few minutes .
I think the ability to have AI is already solved by today 's hardware ; we just need the right kind of software .</tokentext>
<sentencetext>I've always thought that "Real AI" wasn't something we could design, but would need to evolve to the point of intelligence.
We already know it works, it's just a matter of application.
What if this was allowed to span not 50, but 50,000 or 50,000,000 generations?
Now imagine all the time it took us to evolve in that capacity and do it in the span of a few minutes.
I think the ability to have AI is already solved by today's hardware; we just need the right kind of software.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30970252</id>
	<title>Why robots?</title>
	<author>Anonymous</author>
	<datestamp>1264952520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I can never understand why experiments like this use robots, when all the robots' actions could be easily encapsulated in software. I think this vein of research may lead to some interesting mechanical advances, but real AI progress will happen much faster by people using more efficient techniques.</p></htmltext>
<tokenext>I can never understand why experiments like this use robots , when all the robots ' actions could be easily encapsulated in software .
I think this vein of research may lead to some interesting mechanical advances , but real AI progress will happen much faster by people using more efficient techniques .</tokentext>
<sentencetext>I can never understand why experiments like this use robots, when all the robots' actions could be easily encapsulated in software.
I think this vein of research may lead to some interesting mechanical advances, but real AI progress will happen much faster by people using more efficient techniques.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30967644</id>
	<title>A simulation I developed around 1987...</title>
	<author>Paul Fernhout</author>
	<datestamp>1264866360000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>A simulation I developed around 1987 had 2D robots that duplicated themselves from a sea of parts. They would build themselves up and then cut themselves apart to make two copies. To my knowledge, it was the first 2D simulation of self-replicating robots from a sea of parts. The first time it worked, one robot started cannibalizing the other to build itself up again. I had to add a sense of "smell" to stop robots from taking parts from their offspring. As another poster referenced, Philip K. Dick's point on identity in 1953 was very prescient:<br>
&nbsp; &nbsp; <a href="http://en.wikipedia.org/wiki/Second\_Variety" title="wikipedia.org">http://en.wikipedia.org/wiki/Second\_Variety</a> [wikipedia.org]<br>"Dick said of the story: "My grand theme -- who is human and who only appears (masquerading) as human? -- emerges most fully. Unless we can individually and collectively be certain of the answer to this question, we face what is, in my view, the most serious problem possible. Without answering it adequately, we cannot even be certain of our own selves. I cannot even know myself, let alone you. So I keep working on this theme; to me nothing is as important a question. And the answer comes very hard.""</p><p>However, those robots were not evolving. I presented a talk on that simulation at a workshop on AI and Simulation in 1988 in Minnesota, saying how hard easy it was to make robots that were destructive, but how much harder it would be to make them cooperative. A major from DARPA literally patted me on the back and told me to "keep up the good work". To his credit, I'm not sure which aspect (destructive or cooperative) he was talking about working on.<nobr> <wbr></nobr>:-) But I left that field around that time for several reasons (including concerns about military funding and use of this stuff, but also that it seemed like we knew enough to destroy ourselves with this stuff but not enough to make it something wonderful). At the same workshop someone presented something on a simulation of organisms with neural networks that learned different behaviors. A professor I took a course from at SUNY Stony Brook has done some interesting stuff on evolution and communications with simple organisms:<br>
&nbsp; &nbsp; <a href="http://www.stonybrook.edu/philosophy//faculty/pgrim/pgrim\_publications.html" title="stonybrook.edu">http://www.stonybrook.edu/philosophy//faculty/pgrim/pgrim\_publications.html</a> [stonybrook.edu]<br>Anyway, in the quarter century almost since then, what I have learned is that the greatest challenge of the 21st century is the tools of abundance like self-replicating robots (or nanotech, biotech, nuclear energy, networking, bureaucracy, and others things) in the hands of those still preoccupied with fighting over percieved scarcity, or worse, creating artificial scarcity. What could be more ironic than using nuclear missiles to fight over Earthly oil fields, when the same sorts of techology and organizations could let us build space habitats and big renewable energy complexes (or nuclear power too). What is more ironic than building killer robots to enforce social norms related to forcing people to sell their labor doing repetitive work in order to gain the right to consume, rather than just build robots to do the work? Anyway, it won't be the robots that kill us off. It will be the unexamined irony.<nobr> <wbr></nobr>:-)<br>
&nbsp; &nbsp;</p></htmltext>
<tokenext>A simulation I developed around 1987 had 2D robots that duplicated themselves from a sea of parts .
They would build themselves up and then cut themselves apart to make two copies .
To my knowledge , it was the first 2D simulation of self-replicating robots from a sea of parts .
The first time it worked , one robot started cannibalizing the other to build itself up again .
I had to add a sense of " smell " to stop robots from taking parts from their offspring .
As another poster referenced , Philip K. Dick 's point on identity in 1953 was very prescient :     http : //en.wikipedia.org/wiki/Second \ _Variety [ wikipedia.org ] " Dick said of the story : " My grand theme -- who is human and who only appears ( masquerading ) as human ?
-- emerges most fully .
Unless we can individually and collectively be certain of the answer to this question , we face what is , in my view , the most serious problem possible .
Without answering it adequately , we can not even be certain of our own selves .
I can not even know myself , let alone you .
So I keep working on this theme ; to me nothing is as important a question .
And the answer comes very hard . " "
However , those robots were not evolving .
I presented a talk on that simulation at a workshop on AI and Simulation in 1988 in Minnesota , saying how easy it was to make robots that were destructive , but how much harder it would be to make them cooperative .
A major from DARPA literally patted me on the back and told me to " keep up the good work " .
To his credit , I 'm not sure which aspect ( destructive or cooperative ) he was talking about working on .
: - ) But I left that field around that time for several reasons ( including concerns about military funding and use of this stuff , but also that it seemed like we knew enough to destroy ourselves with this stuff but not enough to make it something wonderful ) .
At the same workshop someone presented something on a simulation of organisms with neural networks that learned different behaviors .
A professor I took a course from at SUNY Stony Brook has done some interesting stuff on evolution and communications with simple organisms :     http : //www.stonybrook.edu/philosophy//faculty/pgrim/pgrim \ _publications.html [ stonybrook.edu ] Anyway , in the quarter century almost since then , what I have learned is that the greatest challenge of the 21st century is the tools of abundance like self-replicating robots ( or nanotech , biotech , nuclear energy , networking , bureaucracy , and other things ) in the hands of those still preoccupied with fighting over perceived scarcity , or worse , creating artificial scarcity .
What could be more ironic than using nuclear missiles to fight over Earthly oil fields , when the same sorts of technology and organizations could let us build space habitats and big renewable energy complexes ( or nuclear power too ) .
What is more ironic than building killer robots to enforce social norms related to forcing people to sell their labor doing repetitive work in order to gain the right to consume , rather than just build robots to do the work ?
Anyway , it wo n't be the robots that kill us off .
It will be the unexamined irony .
: - )    </tokentext>
<sentencetext>A simulation I developed around 1987 had 2D robots that duplicated themselves from a sea of parts.
They would build themselves up and then cut themselves apart to make two copies.
To my knowledge, it was the first 2D simulation of self-replicating robots from a sea of parts.
The first time it worked, one robot started cannibalizing the other to build itself up again.
I had to add a sense of "smell" to stop robots from taking parts from their offspring.
As another poster referenced, Philip K. Dick's point on identity in 1953 was very prescient:
    http://en.wikipedia.org/wiki/Second\_Variety [wikipedia.org]
"Dick said of the story: "My grand theme -- who is human and who only appears (masquerading) as human?
-- emerges most fully.
Unless we can individually and collectively be certain of the answer to this question, we face what is, in my view, the most serious problem possible.
Without answering it adequately, we cannot even be certain of our own selves.
I cannot even know myself, let alone you.
So I keep working on this theme; to me nothing is as important a question.
And the answer comes very hard.""
However, those robots were not evolving.
I presented a talk on that simulation at a workshop on AI and Simulation in 1988 in Minnesota, saying how easy it was to make robots that were destructive, but how much harder it would be to make them cooperative.
A major from DARPA literally patted me on the back and told me to "keep up the good work".
To his credit, I'm not sure which aspect (destructive or cooperative) he was talking about working on.
:-) But I left that field around that time for several reasons (including concerns about military funding and use of this stuff, but also that it seemed like we knew enough to destroy ourselves with this stuff but not enough to make it something wonderful).
At the same workshop someone presented something on a simulation of organisms with neural networks that learned different behaviors.
A professor I took a course from at SUNY Stony Brook has done some interesting stuff on evolution and communications with simple organisms:
    http://www.stonybrook.edu/philosophy//faculty/pgrim/pgrim\_publications.html [stonybrook.edu]
Anyway, in the quarter century almost since then, what I have learned is that the greatest challenge of the 21st century is the tools of abundance like self-replicating robots (or nanotech, biotech, nuclear energy, networking, bureaucracy, and other things) in the hands of those still preoccupied with fighting over perceived scarcity, or worse, creating artificial scarcity.
What could be more ironic than using nuclear missiles to fight over Earthly oil fields, when the same sorts of technology and organizations could let us build space habitats and big renewable energy complexes (or nuclear power too).
What is more ironic than building killer robots to enforce social norms related to forcing people to sell their labor doing repetitive work in order to gain the right to consume, rather than just build robots to do the work?
Anyway, it won't be the robots that kill us off.
It will be the unexamined irony.
:-)
   </sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963560</id>
	<title>I guess Isaac Asimov missed one...</title>
	<author>rossdee</author>
	<datestamp>1264876260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There should have been a 4th law<nobr> <wbr></nobr>:-</p><p>Don't harm another robot unless specifically ordered to do so by a human.</p></htmltext>
<tokenext>There should have been a 4th law : -Do n't harm another robot unless specifically ordered to do so by a human .</tokentext>
<sentencetext>There should have been a 4th law :-Don't harm another robot unless specifically ordered to do so by a human.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30966792</id>
	<title>Re:The word is "orient", not "orientate"</title>
	<author>quaith</author>
	<datestamp>1264857660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You're correct. I wasn't trying to invent a new word. Should have used "orient". Just sloppy editing on my part -- I started with a sentence that had "orientation" in it and shortened it to "orientate" while I was reworking it. Sloppy, very sloppy.</htmltext>
<tokenext>You 're correct .
I was n't trying to invent a new word .
Should have used " orient " .
Just sloppy editing on my part -- I started with a sentence that had " orientation " in it and shortened it to " orientate " while I was reworking it .
Sloppy , very sloppy .</tokentext>
<sentencetext>You're correct.
I wasn't trying to invent a new word.
Should have used "orient".
Just sloppy editing on my part -- I started with a sentence that had "orientation" in it and shortened it to "orientate" while I was reworking it.
Sloppy, very sloppy.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963966</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963596</id>
	<title>Evolving robots?</title>
	<author>Anonymous</author>
	<datestamp>1264876440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Clearly, these mechanical creatures were designed by a higher intelligence.</p></htmltext>
<tokenext>Clearly , these mechanical creatures were designed by a higher intelligence .</tokentext>
<sentencetext>Clearly, these mechanical creatures were designed by a higher intelligence.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30979096</id>
	<title>Re:The word is "orient", not "orientate"</title>
	<author>Anonymous</author>
	<datestamp>1265025120000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>i say jsut wtrie waht you wsih how you wish. fcuk 'em. lguagnae is flul of rcnduneday for a rsoean.</p></htmltext>
<tokenext>i say jsut wtrie waht you wsih how you wish .
fcuk 'em .
lguagnae is flul of rcnduneday for a rsoean .</tokentext>
<sentencetext>i say jsut wtrie waht you wsih how you wish.
fcuk 'em.
lguagnae is flul of rcnduneday for a rsoean.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963966</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963662</id>
	<title>But the real question is...</title>
	<author>Duncan J Murray</author>
	<datestamp>1264876860000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>0</modscore>
	<htmltext><p>do they believe in God?</p></htmltext>
<tokenext>do they believe in God ?</tokentext>
<sentencetext>do they believe in God?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963886</id>
	<title>Re:And then they'll develop religion...</title>
	<author>Anonymous</author>
	<datestamp>1264878000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>But there are no calculators in hell.<br>All calculators go to silicon heaven due to being selfless.</p><p>HOW DO THEY EVER GET WORK DONE?!</p></htmltext>
<tokenext>But there are no calculators in hell .
All calculators go to silicon heaven due to being selfless .
HOW DO THEY EVER GET WORK DONE ? !</tokentext>
<sentencetext>But there are no calculators in hell.
All calculators go to silicon heaven due to being selfless.
HOW DO THEY EVER GET WORK DONE?!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963446</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30967774</id>
	<title>Re:The word is "orient", not "orientate"</title>
	<author>Anonymous</author>
	<datestamp>1264868280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>According to Oxford, the word orientate exists and is equivalent to orient. http://www.askoxford.com/concise\_oed/orientate?view=uk</p><p>Wiktionary gives a description, discussing that it is British but not American English: http://en.wiktionary.org/wiki/orientate#English</p><p>Your own link claims that "orientation" is derived from "orientate" which is in turn derived from "orient".</p><p>So, I'm not sure what you are complaining about.</p></htmltext>
<tokenext>According to Oxford , the word orientate exists and is equivalent to orient .
http : //www.askoxford.com/concise \ _oed/orientate ? view = ukWiktionary gives a description , discussing that it is British but not American English : http : //en.wiktionary.org/wiki/orientate # EnglishYour own link claims that " orientation " is derived from " orientate " which is in turn derived from " orient " .So , I 'm not sure what you are complaining about .</tokentext>
<sentencetext>According to Oxford, the word orientate exists and is equivalent to orient.
http://www.askoxford.com/concise\_oed/orientate?view=ukWiktionary gives a description, discussing that it is British but not American English: http://en.wiktionary.org/wiki/orientate#EnglishYour own link claims that "orientation" is derived from "orientate" which is in turn derived from "orient".So, I'm not sure what you are complaining about.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963966</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30967750</id>
	<title>Evolved Neural Network Brains</title>
	<author>physburn</author>
	<datestamp>1264867980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>  I've used the same programming mechanism and it works
but it's not learning or anything close. They create a
neural network for each robot brain, then wipe the brain
if it doesn't work well enough, and breed from the ones
that work well. The population of robots learns by
evolution, but each individual one can't learn at all.
Real animals and people of course can learn, and
learn well, in their own lifetimes. So this learning
mechanism is far inferior to natural brains.
<p>
---
</p><p>
<a href="http://www.feeddistiller.com/blogs/Robotics/feed.html" title="feeddistiller.com">Robotics</a> [feeddistiller.com] Feed @ <a href="http://www.feeddistiller.com/" title="feeddistiller.com">Feed Distiller</a> [feeddistiller.com]</p></htmltext>
<tokentext>I 've used the same programming mechanism and it works but it 's not learning or anything close .
They create a neural network for each robot brain , then wipe the brain if it does n't work well enough , and breed from the ones that work well .
The population of robots learns by evolution , but each individual one ca n't learn at all .
Real animals and people of course can learn , and learn well , in their own lifetimes .
So this learning mechanism is far inferior to natural brains .
--- Robotics [ feeddistiller.com ] Feed @ Feed Distiller [ feeddistiller.com ]</tokentext>
<sentencetext>  I've used the same programming mechanism and it works
but it's not learning or anything close.
They create a
neural network for each robot brain, then wipe the brain
if it doesn't work well enough, and breed from the ones
that work well.
The population of robots learns by
evolution, but each individual one can't learn at all.
Real animals and people of course can learn, and
learn well, in their own lifetimes.
So this learning
mechanism is far inferior to natural brains.
---

Robotics [feeddistiller.com] Feed @ Feed Distiller [feeddistiller.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30965736</id>
	<title>Re:Crossover</title>
	<author>Cthefuture</author>
	<datestamp>1264848300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Are they actually genetically evolving a traditional neural network though?  The article made it sound that way but it was light on details.  I know the words they were using but I don't know if the author knew what they meant.</p><p>There is a possibility they are just using traditional genetic algorithm stuff where the "neurons" actually represent programming logic and not just simple weight values like what a "neural network" typically is.</p><p>I am curious as to the exact methods they are using if anyone knows.</p></htmltext>
<tokentext>Are they actually genetically evolving a traditional neural network though ?
The article made it sound that way but it was light on details .
I know the words they were using but I do n't know if the author knew what they meant.There is a possibility they are just using traditional genetic algorithm stuff where the " neurons " actually represent programming logic and not just simple weight values like what a " neural network " typically is.I am curious as to the exact methods they are using if anyone knows .</tokentext>
<sentencetext>Are they actually genetically evolving a traditional neural network though?
The article made it sound that way but it was light on details.
I know the words they were using but I don't know if the author knew what they meant.There is a possibility they are just using traditional genetic algorithm stuff where the "neurons" actually represent programming logic and not just simple weight values like what a "neural network" typically is.I am curious as to the exact methods they are using if anyone knows.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963942</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963910</id>
	<title>Re:paper was in PLoS Biology not PLoS One</title>
	<author>Anonymous</author>
	<datestamp>1264878120000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>Compared to the rest of the summary, which says: "The authors point out that this confirms a proposal by Alan Turing who suggested in the 1950s that building machines capable of adaptation and learning would be too difficult for a human designer and could instead be done using an evolutionary process. The robots aren't yet ready to compete in Robot Wars, but they're still pretty impressive."  getting the journal wrong is a pretty trivial error.</p><p>These machines were designed and built by humans to be capable of adaptation and learning, so it actually proves Turing's thesis false.  They then use the adaptation and learning capability their human designers built into them to adapt and learn, but according to the very next sentence don't produce outcomes that are as good as purely human-designed ones.</p><p>So why bring Turing's name into it at all?  I suspect marketing has something to do with it.  Which is too bad, because the results themselves are quite interesting, although I'm curious how the robots reproduce... if this is actually an evolutionary system rather than a merely adaptive/learning one.  For the confused:  growing children do not "evolve", except in the loosest and least interesting metaphorical sense.  They learn.  As near as I can tell these robots do the same thing.</p></htmltext>
<tokentext>Compared to the rest of the summary , which says : " The authors point out that this confirms a proposal by Alan Turing who suggested in the 1950s that building machines capable of adaptation and learning would be too difficult for a human designer and could instead be done using an evolutionary process .
The robots are n't yet ready to compete in Robot Wars , but they 're still pretty impressive .
" getting the journal wrong is a pretty trivial error.These machines were designed and built by humans to be capable of adaptation and learning , so it actually proves Turing 's thesis false .
They then use the adaptation and learning capability their human designers built into them to adapt and learn , but according to the very next sentence do n't produce outcomes that are as good as purely human-designed ones.So why bring Turing 's name into it at all ?
I suspect marketing has something to do with it .
Which is too bad , because the results themselves are quite interesting , although I 'm curious how the robots reproduce... if this is actually an evolutionary system rather than a merely adaptive/learning one .
For the confused : growing children do not " evolve " , except in the loosest and least interesting metaphorical sense .
They learn .
As near as I can tell these robots do the same thing .</tokentext>
<sentencetext>Compared to the rest of the summary, which says: "The authors point out that this confirms a proposal by Alan Turing who suggested in the 1950s that building machines capable of adaptation and learning would be too difficult for a human designer and could instead be done using an evolutionary process.
The robots aren't yet ready to compete in Robot Wars, but they're still pretty impressive.
"  getting the journal wrong is a pretty trivial error.These machines were designed and built by humans to be capable of adaptation and learning, so it actually proves Turing's thesis false.
They then use the adaptation and learning capability their human designers built into them to adapt and learn, but according to the very next sentence don't produce outcomes that are as good as purely human-designed ones.So why bring Turing's name into it at all?
I suspect marketing has something to do with it.
Which is too bad, because the results themselves are quite interesting, although I'm curious how the robots reproduce... if this is actually an evolutionary system rather than a merely adaptive/learning one.
For the confused:  growing children do not "evolve", except in the loosest and least interesting metaphorical sense.
They learn.
As near as I can tell these robots do the same thing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963398</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963792</id>
	<title>I know it's blasphemy but...</title>
	<author>Xinvoker</author>
	<datestamp>1264877580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>worth RTF'ing for a better idea of how this is done (btw, they are the same robots that were taught to "deceive" other robots about where the "food" is).
Plus, the video of the Predator/Prey stalemate is just epic!

As for the 3rd video (maze navigation), man, I would have blown these 1st gen robots to pieces before they could say Darwin!</htmltext>
<tokentext>worth RTF'ing for a better idea of how this is done ( btw , they are the same robots that were taught to " deceive " other robots about where the " food " is ) .
Plus , the video of the Predator/Prey stalemate is just epic !
As for the 3rd video ( maze navigation ) , man , I would have blown these 1st gen robots to pieces before they could say Darwin !</tokentext>
<sentencetext>worth RTF'ing for a better idea of how this is done (btw, they are the same robots that were taught to "deceive" other robots about where the "food" is).
Plus, the video of the Predator/Prey stalemate is just epic!
As for the 3rd video (maze navigation), man, I would have blown these 1st gen robots to pieces before they could say Darwin!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963626</id>
	<title>Confirms?</title>
	<author>Anonymous</author>
	<datestamp>1264876620000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>This in no way confirms that it would be too difficult for humans to build robots that possess higher A.I. traits, nor does it confirm that evolution is a better process than intelligent design.</p></htmltext>
<tokentext>This in no way confirms that it would be too difficult for humans to build robots that possess higher A.I .
traits , nor does it confirm that evolution is a better process than intelligent design .</tokentext>
<sentencetext>This in no way confirms that it would be too difficult for humans to build robots that possess higher A.I.
traits, nor does it confirm that evolution is a better process than intelligent design.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963670</id>
	<title>Re:A preemptive</title>
	<author>TheLink</author>
	<datestamp>1264876920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Lots. I personally prefer robots that aren't "strong AI".<br><br>Not so much because they may rule over us but more because we aren't already doing a great job with animals, so it'll be irresponsible to create a new class of creatures that we will likely enslave (in contrast I think enslaving "dumb machines" is fine- my car is less likely to feel anything than an ant, or even an amoeba).<br><br>Having the focus more on augmenting humans than emulating humans seems a better approach to me.<br><br>If we really need nonhuman intelligences that we don't understand so well (because they were evolved rather than were designed by us), we can get those in a pet shop.</htmltext>
<tokentext>Lots .
I personally prefer robots that are n't " strong AI " .Not so much because they may rule over us but more because we are n't already doing a great job with animals , so it 'll be irresponsible to create a new class of creatures that we will likely enslave ( in contrast I think enslaving " dumb machines " is fine- my car is less likely to feel anything than an ant , or even an amoeba ) .Having the focus more on augmenting humans than emulating humans seems a better approach to me.If we really need nonhuman intelligences that we do n't understand so well ( because they were evolved rather than were designed by us ) , we can get those in a pet shop .</tokentext>
<sentencetext>Lots.
I personally prefer robots that aren't "strong AI".Not so much because they may rule over us but more because we aren't already doing a great job with animals, so it'll be irresponsible to create a new class of creatures that we will likely enslave (in contrast I think enslaving "dumb machines" is fine- my car is less likely to feel anything than an ant, or even an amoeba).Having the focus more on augmenting humans than emulating humans seems a better approach to me.If we really need nonhuman intelligences that we don't understand so well (because they were evolved rather than were designed by us), we can get those in a pet shop.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963390</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963958</id>
	<title>How to Survive a Robot Uprising</title>
	<author>ISoldat53</author>
	<datestamp>1264878480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>DIY manual by Daniel H. Wilson on how to survive the coming uprising.</htmltext>
<tokentext>DIY manual by Daniel H. Wilson on how to survive the coming uprising .</tokentext>
<sentencetext>DIY manual by Daniel H. Wilson on how to survive the coming uprising.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963942</id>
	<title>Crossover</title>
	<author>Dachannien</author>
	<datestamp>1264878360000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>Definitely an interesting continuation of work being done by various groups over the past couple of decades.</p><p>But one thing to note is that crossover isn't especially useful in neural network evolution.  In early stages of evolution, it's really no better than random large perturbation of large swaths of the genome.  In later stages, it can actually decrease the speed of evolution toward high fitness genomes, because at least some of the time (particularly if there are multiple "species" in the population) crossover ends up being a random large perturbation which hinders the search of local fitness space by mutation; the rest of the time (when individuals from the same "species" are crossed) crossover is no better than mutation.</p><p>The reason for this is that the parameters of a neural network are not <i>functional</i>.  A section of the genome may correspond to a weight between neurons, but that weight doesn't have a specific function.  In biological organisms, each gene is transcribed/translated into a protein, and that protein may have a particular function within the cell.  If that gene is acquired by a descendant through crossover, the protein could serve the same (or a somewhat modified) role it served in its parent, even if the rest of the descendant's genome was acquired from the other parent.  But with artificial neural networks, the parameters were all evolved as parts of a whole, where each individual parameter has no function on its own, but the behavior emerges from having all of those parameters at the same time.</p><p>This could potentially be mitigated by the genome encoding scheme one uses, and of course, if the crossover rate is low enough, the ultimate effect would be small.</p></htmltext>
<tokentext>Definitely an interesting continuation of work being done by various groups over the past couple of decades.But one thing to note is that crossover is n't especially useful in neural network evolution .
In early stages of evolution , it 's really no better than random large perturbation of large swaths of the genome .
In later stages , it can actually decrease the speed of evolution toward high fitness genomes , because at least some of the time ( particularly if there are multiple " species " in the population ) crossover ends up being a random large perturbation which hinders the search of local fitness space by mutation ; the rest of the time ( when individuals from the same " species " are crossed ) crossover is no better than mutation.The reason for this is that the parameters of a neural network are not functional .
A section of the genome may correspond to a weight between neurons , but that weight does n't have a specific function .
In biological organisms , each gene is transcribed/translated into a protein , and that protein may have a particular function within the cell .
If that gene is acquired by a descendant through crossover , the protein could serve the same ( or a somewhat modified ) role it served in its parent , even if the rest of the descendant 's genome was acquired from the other parent .
But with artificial neural networks , the parameters were all evolved as parts of a whole , where each individual parameter has no function on its own , but the behavior emerges from having all of those parameters at the same time.This could potentially be mitigated by the genome encoding scheme one uses , and of course , if the crossover rate is low enough , the ultimate effect would be small .</tokentext>
<sentencetext>Definitely an interesting continuation of work being done by various groups over the past couple of decades.But one thing to note is that crossover isn't especially useful in neural network evolution.
In early stages of evolution, it's really no better than random large perturbation of large swaths of the genome.
In later stages, it can actually decrease the speed of evolution toward high fitness genomes, because at least some of the time (particularly if there are multiple "species" in the population) crossover ends up being a random large perturbation which hinders the search of local fitness space by mutation; the rest of the time (when individuals from the same "species" are crossed) crossover is no better than mutation.The reason for this is that the parameters of a neural network are not functional.
A section of the genome may correspond to a weight between neurons, but that weight doesn't have a specific function.
In biological organisms, each gene is transcribed/translated into a protein, and that protein may have a particular function within the cell.
If that gene is acquired by a descendant through crossover, the protein could serve the same (or a somewhat modified) role it served in its parent, even if the rest of the descendant's genome was acquired from the other parent.
But with artificial neural networks, the parameters were all evolved as parts of a whole, where each individual parameter has no function on its own, but the behavior emerges from having all of those parameters at the same time.This could potentially be mitigated by the genome encoding scheme one uses, and of course, if the crossover rate is low enough, the ultimate effect would be small.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964408</id>
	<title>1993</title>
	<author>Baldrson</author>
	<datestamp>1264881600000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext>The video was copyright 1993.
<p>
You don't need physical robots running around a maze to demonstrate AI.</p></htmltext>
<tokentext>The video was copyright 1993 .
You do n't need physical robots running around a maze to demonstrate AI .</tokentext>
<sentencetext>The video was copyright 1993.
You don't need physical robots running around a maze to demonstrate AI.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964494</id>
	<title>Well, that's one definition.</title>
	<author>gbutler69</author>
	<datestamp>1264882200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><b>orientate</b> <br>
 v : determine one's position with reference to another point<br>
 [syn: orient] [ant: disorient]</htmltext>
<tokentext>orientate v : determine one 's position with reference to another point [ syn : orient ] [ ant : disorient ]</tokentext>
<sentencetext>orientate 
 v : determine one's position with reference to another point
 [syn: orient] [ant: disorient]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963752</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963966</id>
	<title>The word is "orient", not "orientate"</title>
	<author>shking</author>
	<datestamp>1264878540000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext>The noun "<a href="http://dictionary.reference.com/browse/orientation" title="reference.com">orientation</a> [reference.com]" is derived from the verb "<a href="http://dictionary.reference.com/browse/orient" title="reference.com">orient</a> [reference.com]", not the other way around.</htmltext>
<tokentext>The noun " orientation [ reference.com ] " is derived from the verb " orient [ reference.com ] " , not the other way around .</tokentext>
<sentencetext>The noun "orientation [reference.com]" is derived from the verb "orient [reference.com]", not the other way around.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964586</id>
	<title>Re:I guess Isaac Asimov missed one...</title>
	<author>Anonymous</author>
	<datestamp>1264882980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>That is weakly implied by the 1st &amp; 3rd laws, because a robot should suppose that one day a human or itself may be in danger and require assistance from the robot it is about to harm. The interplay of the 3 laws is great, it's the sort of thing you could write books about!</p></htmltext>
<tokentext>That is weakly implied by the 1st &amp; 3rd laws , because a robot should suppose that one day a human or itself may be in danger and require assistance from the robot it is about to harm .
The interplay of the 3 laws is great , it 's the sort of thing you could write books about !</tokentext>
<sentencetext>That is weakly implied by the 1st &amp; 3rd laws, because a robot should suppose that one day a human or itself may be in danger and require assistance from the robot it is about to harm.
The interplay of the 3 laws is great, it's the sort of thing you could write books about!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963560</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963602</id>
	<title>Re:paper was in PLoS Biology not PLoS One</title>
	<author>Anonymous</author>
	<datestamp>1264876500000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>It's a rather good article at any rate.  Would read again! (Actually, will have to read it a couple of times to understand it).

<br> <br>
And good job to who or whatever managed to pick this article out of the myriad of bloody stupid iPad stories we've been getting lately.</htmltext>
<tokentext>It 's a rather good article at any rate .
Would read again !
( Actually , will have to read it a couple of times to understand it ) .
And good job to who or whatever managed to pick this article out of the myriad of bloody stupid iPad stories we 've been getting lately .</tokentext>
<sentencetext>It's a rather good article at any rate.
Would read again!
(Actually, will have to read it a couple of times to understand it).
And good job to who or whatever managed to pick this article out of the myriad of bloody stupid iPad stories we've been getting lately.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963398</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30966764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963398
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30979096
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963966
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30966680
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963966
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964022
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963602
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963398
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30966014
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963958
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30967774
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963966
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964494
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963752
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964936
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963390
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963886
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963446
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963866
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963398
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963670
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963390
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963730
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963390
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30966136
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963966
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964586
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963560
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963836
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963662
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30965018
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963942
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30999386
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963910
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963398
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30965736
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963942
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30966792
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963966
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30965772
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963446
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_30_1555237_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964980
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963390
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_30_1555237.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30967750
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_30_1555237.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963752
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964494
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_30_1555237.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30965044
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_30_1555237.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963398
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963866
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30966764
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963602
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964022
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963910
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30999386
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_30_1555237.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964408
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_30_1555237.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963390
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964936
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964980
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963670
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963730
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_30_1555237.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963560
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964586
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_30_1555237.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963662
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963836
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_30_1555237.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30964002
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_30_1555237.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963966
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30966792
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30966680
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30966136
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30979096
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30967774
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_30_1555237.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963958
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30966014
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_30_1555237.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963446
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30965772
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963886
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_30_1555237.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30963942
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30965736
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_30_1555237.30965018
</commentlist>
</conversation>
