<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_01_15_028201</id>
	<title>Robotics Prof Fears Rise of Military Robots</title>
	<author>timothy</author>
	<datestamp>1263569760000</datestamp>
	<htmltext>An anonymous reader writes <i>"Interesting video interview on silicon.com with Sheffield University's Noel Sharkey, professor of AI &amp; robotics. The white-haired prof talks state-of-the-robot-nation &mdash; discussing the most impressive robots currently clanking about on two legs (hello Asimo) and who's doing the most <a href="http://www.silicon.com/technology/hardware/2010/01/13/video-artificial-intelligence-noel-sharkey-on-the-inexorable-rise-of-robots-39745322/">interesting things in UK robotics research</a> (something involving crickets apparently). He also voices concerns about military use of robots &mdash; suggesting it won't be long before armies are sending out fully autonomous killing machines."</i></htmltext>
<tokentext>An anonymous reader writes " Interesting video interview on silicon.com with Sheffield University 's Noel Sharkey , professor of AI &amp; robotics .
The white-haired prof talks state-of-the-robot-nation    discussing the most impressive robots currently clanking about on two legs ( hello Asimo ) and who 's doing the most interesting things in UK robotics research ( something involving crickets apparently ) .
He also voices concerns about military use of robots    suggesting it wo n't be long before armies are sending out fully autonomous killing machines .
"</tokentext>
<sentencetext>An anonymous reader writes "Interesting video interview on silicon.com with Sheffield University's Noel Sharkey, professor of AI &amp; robotics.
The white-haired prof talks state-of-the-robot-nation — discussing the most impressive robots currently clanking about on two legs (hello Asimo) and who's doing the most interesting things in UK robotics research (something involving crickets apparently).
He also voices concerns about military use of robots — suggesting it won't be long before armies are sending out fully autonomous killing machines.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779558</id>
	<title>Rogue Bolo</title>
	<author>Anonymous</author>
	<datestamp>1263574440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Okay, where's the tag?</p><p>---rgb</p></htmltext>
<tokentext>Okay , where 's the tag ? ---rgb</tokentext>
<sentencetext>Okay, where's the tag?
---rgb</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775936</id>
	<title>Something involving crickets - or krikkits?</title>
	<author>jools33</author>
	<datestamp>1263498780000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><p>This must be a typo - I'm sure UK robotic scientists are investigating krikkits and their imminent return to collect the ashes.</p></htmltext>
<tokentext>This must be a typo - I 'm sure UK robotic scientists are investigating krikkits and their imminent return to collect the ashes .</tokentext>
<sentencetext>This must be a typo - I'm sure UK robotic scientists are investigating krikkits and their imminent return to collect the ashes.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776194</id>
	<title>Re:"Friendly AI"</title>
	<author>Eivind</author>
	<datestamp>1263588540000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Agreed. Suffering one's own losses is pretty much the only thing that in practice limits the willingness of some leaders to wage war. It doesn't limit it all that much either, truth be told. But the endless rows of young American men coming home horizontally DID play a major role in turning opinion in cases like the Vietnam War, and I think it'll do the same in Afghanistan and Iraq. The American public tire of sacrificing an endless row of their young, for issues and countries they don't really care -that- much about.</p><p>Already, technological differences mean that the US can wage war with very low body-counts. Around 4500 US soldiers have been killed in Iraq, which compares favourably with the ~100K Iraqis who have been killed (a 1:20 ratio, approximately). I do not think the US public would've accepted the war (many of them don't accept it, even now) if the ratio to be expected had been closer to 1:1.</p><p>I can't help but wonder how many wars the next Bush will choose to engage in, if it can be done with a 1:100 ratio, or a 1:1000, or a 1:5000. If you could overthrow a major government, while losing -20- of your own men, would the reluctance to do so be smaller? I think it would.</p></htmltext>
<tokentext>Agreed .
Suffering one 's own losses is pretty much the only thing that in practice limits the willingness of some leaders to wage war .
It does n't limit it all that much either , truth be told .
But the endless rows of young American men coming home horizontally DID play a major role in turning opinion in cases like the Vietnam War , and I think it 'll do the same in Afghanistan and Iraq .
The American public tire of sacrificing an endless row of their young , for issues and countries they do n't really care -that- much about .
Already , technological differences mean that the US can wage war with very low body-counts .
Around 4500 US soldiers have been killed in Iraq , which compares favourably with the ~ 100K Iraqis who have been killed .
( a 1 : 20 ratio , approximately ) .
I do not think the US public would 've accepted the war ( many of them do n't accept it , even now ) if the ratio to be expected had been closer to 1 : 1 .
I ca n't help but wonder how many wars the next Bush will choose to engage in , if it can be done with a 1 : 100 ratio , or a 1 : 1000 , or a 1 : 5000 .
If you could overthrow a major government , while losing -20- of your own men , would the reluctance to do so be smaller ?
I think it would .</tokentext>
<sentencetext>Agreed.
Suffering one's own losses is pretty much the only thing that in practice limits the willingness of some leaders to wage war.
It doesn't limit it all that much either, truth be told.
But the endless rows of young American men coming home horizontally DID play a major role in turning opinion in cases like the Vietnam War, and I think it'll do the same in Afghanistan and Iraq.
The American public tire of sacrificing an endless row of their young, for issues and countries they don't really care -that- much about.
Already, technological differences mean that the US can wage war with very low body-counts.
Around 4500 US soldiers have been killed in Iraq, which compares favourably with the ~100K Iraqis who have been killed.
(a 1:20 ratio, approximately).
I do not think the US public would've accepted the war (many of them don't accept it, even now) if the ratio to be expected had been closer to 1:1.
I can't help but wonder how many wars the next Bush will choose to engage in, if it can be done with a 1:100 ratio, or a 1:1000, or a 1:5000.
If you could overthrow a major government, while losing -20- of your own men, would the reluctance to do so be smaller?
I think it would.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779814</id>
	<title>Re:"Friendly AI"</title>
	<author>killmenow</author>
	<datestamp>1263575940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I think the better example for "why the morality of killing is situational" is this:<br>
<br>
Suppose somebody murdered your mother. Your mother's dead. That was wrong.<br>
<br>
Now, back up the clock. Suppose someone is trying to kill your mother but there's a struggle and in the course of defending herself she ends up killing them. They are dead. That was okay.<br>
<br>
All killing is not equal. It's generally accepted that killing in self defense is justified. One of the problems with Americans dropping bombs indiscriminately on houses in Afghanistan and suicide bombers blowing themselves up in malls is <b>they believe they are acting in self defense</b> so it's justifiable.</htmltext>
<tokentext>I think the better example for " why the morality of killing is situational " is this :
Suppose somebody murdered your mother .
Your mother 's dead .
That was wrong .
Now , back up the clock .
Suppose someone is trying to kill your mother but there 's a struggle and in the course of defending herself she ends up killing them .
They are dead .
That was okay .
All killing is not equal .
It 's generally accepted that killing in self defense is justified .
One of the problems with Americans dropping bombs indiscriminately on houses in Afghanistan and suicide bombers blowing themselves up in malls is they believe they are acting in self defense so it 's justifiable .</tokentext>
<sentencetext>I think the better example for "why the morality of killing is situational" is this:

Suppose somebody murdered your mother.
Your mother's dead.
That was wrong.
Now, back up the clock.
Suppose someone is trying to kill your mother but there's a struggle and in the course of defending herself she ends up killing them.
They are dead.
That was okay.
All killing is not equal.
It's generally accepted that killing in self defense is justified.
One of the problems with Americans dropping bombs indiscriminately on houses in Afghanistan and suicide bombers blowing themselves up in malls is they believe they are acting in self defense so it's justifiable.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775664</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779426</id>
	<title>Just unplug the damn thing</title>
	<author>RogueWarrior65</author>
	<datestamp>1263573900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Herein lies the root problem of quantum-leap advances in technology.  Read enough science-fiction works and you start to realize that all that kick-ass technology only works if you have the power source to drive it.  Even Ayn Rand talked about it in Atlas Shrugged but may not have realized the significance of the concept.  The free-energy generator that powers Galt's Gulch is really the only way that society could function.  By the same token, have you ever seen Asimo's power supply?  Has the thing ever run for days without being plugged in?  Even the first Gulf War was heavily influenced by the supply chain for tank fuel.  So I don't see military robots being terribly useful unless you invent the uber power supply.  And it's quite possible that if the world has the uber power supply, there may be less war in general.</p></htmltext>
<tokentext>Herein lies the root problem of quantum-leap advances in technology .
Read enough science-fiction works and you start to realize that all that kick-ass technology only works if you have the power source to drive it .
Even Ayn Rand talked about it in Atlas Shrugged but may not have realized the significance of the concept .
The free-energy generator that powers Galt 's Gulch is really the only way that society could function .
By the same token , have you ever seen Asimo 's power supply ?
Has the thing ever run for days without being plugged in ?
Even the first Gulf War was heavily influenced by the supply chain for tank fuel .
So I do n't see military robots being terribly useful unless you invent the uber power supply .
And it 's quite possible that if the world has the uber power supply , there may be less war in general .</tokentext>
<sentencetext>Herein lies the root problem of quantum-leap advances in technology.
Read enough science-fiction works and you start to realize that all that kick-ass technology only works if you have the power source to drive it.
Even Ayn Rand talked about it in Atlas Shrugged but may not have realized the significance of the concept.
The free-energy generator that powers Galt's Gulch is really the only way that society could function.
By the same token, have you ever seen Asimo's power supply?
Has the thing ever run for days without being plugged in?
Even the first Gulf War was heavily influenced by the supply chain for tank fuel.
So I don't see military robots being terribly useful unless you invent the uber power supply.
And it's quite possible that if the world has the uber power supply, there may be less war in general.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776950</id>
	<title>Re:"Friendly AI"</title>
	<author>Anonymous</author>
	<datestamp>1263555840000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>War... war never changes.</p></htmltext>
<tokentext>War... war never changes .</tokentext>
<sentencetext>War... war never changes.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970</id>
	<title>"Friendly AI"</title>
	<author>Baldrson</author>
	<datestamp>1263487080000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext>This is one of the things that makes me think the concern about "friendly AI" is blown out of proportion.  The problem isn't making sure the AIs are "friendly" -- it's making sure the NI (natural intelligence) <i>owners</i> of the AIs are "friendly".
<p>
If half the effort spent on "friendly AI" were spent on examining the ownership of AIs, there might be some hope.</p></htmltext>
<tokentext>This is one of the things that makes me think the concern about " friendly AI " is blown out of proportion .
The problem is n't making sure the AIs are " friendly " -- it 's making sure the NI ( natural intelligence ) owners of the AIs are " friendly " .
If half the effort spent on " friendly AI " were spent on examining the ownership of AIs , there might be some hope .</tokentext>
<sentencetext>This is one of the things that makes me think the concern about "friendly AI" is blown out of proportion.
The problem isn't making sure the AIs are "friendly" -- it's making sure the NI (natural intelligence) owners of the AIs are "friendly".
If half the effort spent on "friendly AI" were spent on examining the ownership of AIs, there might be some hope.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30778596</id>
	<title>Re:"Friendly AI"</title>
	<author>Idiomatick</author>
	<datestamp>1263569280000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>We kill less now than we did back then... even with the ease of doing so we have today, that's a good thing.</htmltext>
<tokentext>We kill less now than we did back then... even with the ease of doing so we have today , that 's a good thing .</tokentext>
<sentencetext>We kill less now than we did back then... even with the ease of doing so we have today, that's a good thing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775632</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775084</id>
	<title>Look on the bright side</title>
	<author>Anonymous</author>
	<datestamp>1263487980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>"He also voices concerns about military use of robots &mdash; suggesting it won't be long before armies are sending out fully autonomous killing machines."</p></div><p>This Gloomy Gus overlooks the obvious. These "fully autonomous killing machines" - let's call them, oh I don't know, "killbots" - will almost certainly have a preset kill limit. So right there we'll have an easy way to stop them!</p></htmltext>
<tokentext>" He also voices concerns about military use of robots    suggesting it wo n't be long before armies are sending out fully autonomous killing machines . "
This Gloomy Gus overlooks the obvious .
These " fully autonomous killing machines " - let 's call them , oh I do n't know , " killbots " - will almost certainly have a preset kill limit .
So right there we 'll have an easy way to stop them !</tokentext>
<sentencetext>"He also voices concerns about military use of robots — suggesting it won't be long before armies are sending out fully autonomous killing machines."
This Gloomy Gus overlooks the obvious.
These "fully autonomous killing machines" - let's call them, oh I don't know, "killbots" - will almost certainly have a preset kill limit.
So right there we'll have an easy way to stop them!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777964</id>
	<title>I've seen this before...</title>
	<author>eXFeLoN</author>
	<datestamp>1263565560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>We really should listen to this man.  Chances are he does know how to stop the eradication of our inferior species by the much superior robotic overlords that I thoroughly support in their continued overthrowing efforts.</htmltext>
<tokentext>We really should listen to this man .
Chances are he does know how to stop the eradication of our inferior species by the much superior robotic overlords that I thoroughly support in their continued overthrowing efforts .</tokentext>
<sentencetext>We really should listen to this man.
Chances are he does know how to stop the eradication of our inferior species by the much superior robotic overlords that I thoroughly support in their continued overthrowing efforts.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777018</id>
	<title>The problem with this argument</title>
	<author>Anonymous</author>
	<datestamp>1263556440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The problem with your argument is that if it's adhered to then one of two things would have to apply: either you would have to cheapen the domestic value of a life, or you would have to cease taking part in wars.</p><p>Take the bombing of Serbia, widely supported by the European Left, for example - should you prosecute the defense minister for murder at the first civilian death? Perhaps retroactively? A couple of billion in damages per civilian death?</p><p>In practice, life must become cheap in war. Otherwise all those for whom life isn't cheap would be powerless against those for whom it is cheap.</p></htmltext>
<tokentext>The problem with your argument is that if it 's adhered to then one of two things would have to apply : either you would have to cheapen the domestic value of a life , or you would have to cease taking part in wars .
Take the bombing of Serbia , widely supported by the European Left , for example - should you prosecute the defense minister for murder at the first civilian death ?
Perhaps retroactively ?
A couple of billion in damages per civilian death ?
In practice , life must become cheap in war .
Otherwise all those for whom life is n't cheap would be powerless against those for whom it is cheap .</tokentext>
<sentencetext>The problem with your argument is that if it's adhered to then one of two things would have to apply: either you would have to cheapen the domestic value of a life, or you would have to cease taking part in wars.
Take the bombing of Serbia, widely supported by the European Left, for example - should you prosecute the defense minister for murder at the first civilian death?
Perhaps retroactively?
A couple of billion in damages per civilian death?
In practice, life must become cheap in war.
Otherwise all those for whom life isn't cheap would be powerless against those for whom it is cheap.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775664</id>
	<title>Re:"Friendly AI"</title>
	<author>stdarg</author>
	<datestamp>1263494760000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>We honestly believe there's a distinction between the two. "Americans didn't set out to kill civilians" war hawks will huff. Yes, but they're still dead, aren't they?</p></div><p>Are you serious? So to take a personal example, say somebody murdered your mother. How would you want that person punished? Many people would call for the death penalty. Now what if someone killed your mother completely by accident... say your mom ran a red light and got hit by someone. She's still dead, isn't she?</p></htmltext>
<tokentext>We honestly believe there 's a distinction between the two .
" Americans did n't set out to kill civilians " war hawks will huff .
Yes , but they 're still dead , are n't they ?
Are you serious ?
So to take a personal example , say somebody murdered your mother .
How would you want that person punished ?
Many people would call for the death penalty .
Now what if someone killed your mother completely by accident... say your mom ran a red light and got hit by someone .
She 's still dead , is n't she ?</tokentext>
<sentencetext>We honestly believe there's a distinction between the two.
"Americans didn't set out to kill civilians" war hawks will huff.
Yes, but they're still dead, aren't they?
Are you serious?
So to take a personal example, say somebody murdered your mother.
How would you want that person punished?
Many people would call for the death penalty.
Now what if someone killed your mother completely by accident... say your mom ran a red light and got hit by someone.
She's still dead, isn't she?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775714</id>
	<title>No Evil Robots!</title>
	<author>Anonymous</author>
	<datestamp>1263495420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>One of my former profs talks about this from time to time.<br>I always thought it was a nice idea.</p><p>http://www.cs.sfu.ca/~vaughan/noevilrobots.html</p></htmltext>
<tokentext>One of my former profs talks about this from time to time .
I always thought it was a nice idea .
http://www.cs.sfu.ca/~vaughan/noevilrobots.html</tokentext>
<sentencetext>One of my former profs talks about this from time to time.
I always thought it was a nice idea.
http://www.cs.sfu.ca/~vaughan/noevilrobots.html</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776380</id>
	<title>I agree with /.</title>
	<author>Anonymous</author>
	<datestamp>1263548040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>With our current understanding of AI, I don't believe we will have to worry about the machines, but the people behind them. For a "Terminator" scenario to take place we must first find a way around the Chinese Room problem [<a href="http://en.wikipedia.org/wiki/Chinese_room" title="wikipedia.org" rel="nofollow">wikipedia</a> [wikipedia.org]]. If that is even possible on current hardware, then the machine would no longer be classified as Strong AI [<a href="http://en.wikipedia.org/wiki/Chinese_room#Strong_AI" title="wikipedia.org" rel="nofollow">wikipedia</a> [wikipedia.org]] and would become something else. From a philosophical, and for me moral, standpoint, sending machines such as those to war would be just as tragic as human loss.
<br>
<br>
There are numerous pros and cons to robotic warfare; the biggest con is that it's still war, with or without human loss...</htmltext>
<tokentext>With our current understanding of AI , I do n't believe we will have to worry about the machines , but the people behind them .
For a " Terminator " scenario to take place we must first find a way around the Chinese Room problem [ wikipedia [ wikipedia.org ] ] .
If that is even possible on current hardware , then the machine would no longer be classified as Strong AI [ wikipedia [ wikipedia.org ] ] and would become something else .
From a philosophical , and for me moral , standpoint , sending machines such as those to war would be just as tragic as human loss .
There are numerous pros and cons to robotic warfare ; the biggest con is that it 's still war , with or without human loss ...</tokentext>
<sentencetext>With our current understanding of AI, I don't believe we will have to worry about the machines, but the people behind them.
For a "Terminator" scenario to take place we must first find a way around the Chinese Room problem [wikipedia [wikipedia.org]].
If that is even possible on current hardware, then the machine would no longer be classified as Strong AI [wikipedia [wikipedia.org]] and would become something else.
From a philosophical, and for me moral, standpoint, sending machines such as those to war would be just as tragic as human loss.
There are numerous pros and cons to robotic warfare; the biggest con is that it's still war, with or without human loss...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776848</id>
	<title>Re:"Friendly AI"</title>
	<author>ultranova</author>
	<datestamp>1263553800000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><blockquote><div><p>That's just it -- human nature never changes. The general can order genocide but it's up to the soldiers to carry it out. The My Lai Massacre was stopped by a helicopter pilot who put his bird between the civilians and "told his crew that if the U.S. soldiers shot at the Vietnamese while he was trying to get them out of the bunker that they were to open fire at these soldiers."</p></div></blockquote><p>Yes. On the other hand, the reason that My Lai happened in the first place is that people had been under constant stress and simply snapped. Had the entire war been fought with robotic soldiers, and instead of body bags only scrap metal had been sent back home, would the general have ordered a genocide? I doubt it, for there would have been no emotional involvement, and no stress and bottled-up hatred.</p><p>Finally, if you're a soldier patrolling a conquered city, and you see someone seemingly unarmed running towards you, it could be a suicide bomber about to blow you up, or it could simply be someone running. You risk killing an innocent or you risk getting killed. On the other hand, if the patrol is robotic, it can simply wait; if the robot is blown up, no big deal, the factory has already built three new ones to replace it by the time the last pieces hit the ground, so you can err on the side of not shooting unless it's really obvious it's an enemy.</p><p>Robot infantry removes human emotions from the war, but that's not necessarily a bad thing.</p></htmltext>
<tokentext>That 's just it -- human nature never changes .
The general can order genocide but it 's up to the soldiers to carry it out .
The My Lai Massacre was stopped by a helicopter pilot who put his bird between the civilians and " told his crew that if the U.S. soldiers shot at the Vietnamese while he was trying to get them out of the bunker that they were to open fire at these soldiers . "
Yes .
On the other hand , the reason that My Lai happened in the first place is that people had been under constant stress and simply snapped .
Had the entire war been fought with robotic soldiers , and instead of body bags only scrap metal had been sent back home , would the general have ordered a genocide ?
I doubt it , for there would have been no emotional involvement , and no stress and bottled-up hatred .
Finally , if you 're a soldier patrolling a conquered city , and you see someone seemingly unarmed running towards you , it could be a suicide bomber about to blow you up , or it could simply be someone running .
You risk killing an innocent or you risk getting killed .
On the other hand , if the patrol is robotic , it can simply wait ; if the robot is blown up , no big deal , the factory has already built three new ones to replace it by the time the last pieces hit the ground , so you can err on the side of not shooting unless it 's really obvious it 's an enemy .
Robot infantry removes human emotions from the war , but that 's not necessarily a bad thing .</tokentext>
<sentencetext>That's just it -- human nature never changes.
The general can order genocide but it's up to the soldiers to carry it out.
The My Lai Massacre was stopped by a helicopter pilot who put his bird between the civilians and "told his crew that if the U.S. soldiers shot at the Vietnamese while he was trying to get them out of the bunker that they were to open fire at these soldiers."
Yes.
On the other hand, the reason that My Lai happened in the first place is that people had been under constant stress and simply snapped.
Had the entire war been fought with robotic soldiers, and instead of body bags only scrap metal had been sent back home, would the general have ordered a genocide?
I doubt it, for there would have been no emotional involvement, and no stress and bottled-up hatred.
Finally, if you're a soldier patrolling a conquered city, and you see someone seemingly unarmed running towards you, it could be a suicide bomber about to blow you up, or it could simply be someone running.
You risk killing an innocent or you risk getting killed.
On the other hand, if the patrol is robotic, it can simply wait; if the robot is blown up, no big deal, the factory has already built three new ones to replace it by the time the last pieces hit the ground, so you can err on the side of not shooting unless it's really obvious it's an enemy.
Robot infantry removes human emotions from the war, but that's not necessarily a bad thing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776020</id>
	<title>This is a great idea...</title>
	<author>Anonymous</author>
	<datestamp>1263586140000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>For example, if we want to charge a group of drones with indiscriminate fire we can simply jail the developer, instead of a section of marines!<br>More lives are saved =)</p></htmltext>
<tokentext>For example , if we want to charge a group of drones with indiscriminate fire we can simply jail the developer , instead of a section of marines !
More lives are saved = )</tokentext>
<sentencetext>For example, if we want to charge a group of drones with indiscriminate fire we can simply jail the developer, instead of a section of marines!
More lives are saved =)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30778342</id>
	<title>"It'll take decades"</title>
	<author>Anonymous</author>
	<datestamp>1263567900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>On a bit of a related note.  I remember a bunch of news stories/documentaries a while back (2002-2003 timeframe) of how they were doing the initial tests of armed UAVs and all of the "military experts" were so adamant that "It'll be decades before these things see actual combat".  Now we know that they were sent on their initial kill missions around what, 2006? 2007?  To all who would say "we don't have to worry about this for a long time" you might want to look at some recent history.</p></htmltext>
<tokenext>On a bit of a realted note .
I remember a bunch of news stories/documentaries a while back ( 2002-2003 timeframe ) of how they were doing the initial tests of armed UAV 's and all of the " military experts " were so addiment that " It 'll be decades before these things see actual combat " .
Now we know that they they were sent on their initial kill missions around what , 2006 ? , 2007 ? .
To all who would say " we do n't have to worry about this for a long time " you might want to look at some recent history .</tokentext>
<sentencetext>On a bit of a realted note.
I remember a bunch of news stories/documentaries a while back (2002-2003 timeframe) of how they were doing the initial tests of armed UAV's and all of the "military experts" were so addiment that "It'll be decades before these things see actual combat".
Now we know that they they were sent on their initial kill missions around what, 2006?, 2007?.
To all who would say "we don't have to worry about this for a long time" you might want to look at some recent history.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30781794</id>
	<title>Re:skynet</title>
	<author>rantingkitten</author>
	<datestamp>1263583920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Yeah, because tags are useful for <b>anything</b> around here.  Why do we even have them, and why do people worry so much about them?  <br>
<br>
I've disabled them and haven't missed a damn thing other than inane gibberish at the bottom of each story saying "yes", "no", "!nuts", and other pointless crap.</htmltext>
<tokenext>Yeah , because tags are useful for anything around here .
Why do we even have them , and why do people worry so much about them ?
I 've disabled them and have n't missed a damn thing other than inane gibberish at the bottom of each story saying " yes " , " no " , " ! nuts " , and other pointless crap .</tokentext>
<sentencetext>Yeah, because tags are useful for anything around here.
Why do we even have them, and why do people worry so much about them?
I've disabled them and haven't missed a damn thing other than inane gibberish at the bottom of each story saying "yes", "no", "!nuts", and other pointless crap.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774966</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775564</id>
	<title>that is alright</title>
	<author>bongey</author>
	<datestamp>1263493380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>That is alright we will just create an army of clones, oh wrong thread.</htmltext>
<tokenext>That is alright we will just create an army of clones , oh wrong thread .</tokentext>
<sentencetext>That is alright we will just create an army of clones, oh wrong thread.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779716</id>
	<title>To Quote Patton</title>
	<author>jimbobborg</author>
	<datestamp>1263575340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"No bastard ever won a war by dying for his country. He won it by making the other poor dumb bastard die for his country."</p></htmltext>
<tokenext>" No bastard ever won a war by dying for his country .
He won it by making the other poor dumb bastard die for his country .
"</tokentext>
<sentencetext>"No bastard ever won a war by dying for his country.
He won it by making the other poor dumb bastard die for his country.
"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775548</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775082</id>
	<title>Wernstrom Killbots...</title>
	<author>Anonymous</author>
	<datestamp>1263487920000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>Do these killbots come with a machine gun AND Lotus Notes?</p></htmltext>
<tokenext>Do these killbots come with a machine gun AND Lotus Notes ?</tokentext>
<sentencetext>Do these killbots come with a machine gun AND Lotus Notes?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776916</id>
	<title>I have doubts</title>
	<author>Rothron the Wise</author>
	<datestamp>1263555120000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>I suspect it will be too easy to create effective countermeasures for military robots ever to be a real threat.

After all, since the robots are identical, the same countermeasure will be effective against all of them. They will also have simple sensors, which are easier to trick than human soldiers.</htmltext>
<tokenext>I suspect it will be too easy to create effective countermeasures for military robots ever to be a real threat .
After all , since the robots are identical , the same countermeasure will be effective against all of them .
They will also have simple sensors , which are easier to trick than human soldiers .</tokentext>
<sentencetext>I suspect it will be too easy to create effective countermeasures for military robots ever to be a real threat.
After all, since the robots are identical, the same countermeasure will be effective against all of them.
They will also have simple sensors, which are easier to trick than human soldiers.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776042</id>
	<title>Re:"Friendly AI"</title>
	<author>Anonymous</author>
	<datestamp>1263586560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>First off, terrible example:  if my mom ran a red light, it's partially her fault.  It's not really your fault for going into a building that was going to be bombed (unless you're a human shield or something).  Let's say the man who hit my mom was a drunk driver.  And this wasn't his first fatal accident.</p><p>I don't think the death penalty is ever appropriate.  I'm also not convinced there's a huge difference between *a certain class of* accidental homicide and intentional homicide.</p><p>With intentional homicide, a big part of the reason we punish very harshly is that we're afraid the sort of person who'll do this is liable to decide to do it again.  With accidental homicide?  Was it avoidable?  If a person is criminally negligent, we might again be just as afraid they'll do it again.  A recidivist drunk driver, who ultimately shows no more regard for human life than the intentional murderer, is just as dangerous and just as despicable.</p><p>A pattern of bombing buildings full of civilians, even by accident, is just as horrifying as somebody killing as many on purpose.  If you want lenience, then the accidents have to STOP HAPPENING.  No accidents in a few decades, say.</p><p>But what happens is a cost-benefit analysis.  "We can win with 0% accidents, but it would take a kabillion dollars and 500 million soldiers and risks Y and Z.  Or we could go with what we know, and win, and have an accident X% of the time, which is regrettable but acceptable.  Or we could lose."  Obviously a bit more complicated than that, but there it is.  And as long as option B is chosen, then the difference between doing it intentionally and doing it by an accident that we chose to risk is academic.</p><p>And maybe option B is the right choice, the best of all worlds.  It STILL doesn't make the other side feel any better.</p></htmltext>
<tokenext>First off , terrible example : if my mom ran a red light , it 's partially her fault .
It 's not really your fault for going into a building that was going to be bombed ( unless you 're a human shield or something ) .
Let 's say the man who hit my mom was a drunk driver .
And this was n't his first fatal accident.I do n't think the death penalty is ever appropriate .
I 'm also not convinced there 's a huge difference between * a certain class of * accidental homicide and intentional homicide.With intentional homicide , a big part of the reason we punish very harshly is we 're afraid that the sort of person who 'll do this is liable to decide to do it again .
With accidental homicide ?
Was it avoidable ?
If a person is criminally negligent , we might again be just as afraid they 'll do it again .
Like a recidivist drunk driver , who ultimately shows no more regard for human life than the intentional murderer , is just as dangerous and just as despicable.A pattern of bombing buildings full of civilians , even by accident , is just as horrifying as somebody doing as many on purpose .
If you want lenience , then the accidents have to STOP HAPPENING .
No accidents in a few decades , say.But what happens is a cost-benefit analysis .
" We can win with 0 \ % accidents , but it would take a kabillion dollars and 500 million soldiers and risks Y and Z. Or we could go with what we know , and win , and have an accident X \ % of the time , which is regrettable but acceptable .
Or we could lose .
" Obviously a bit more complicated than that , but there it is .
And as long as option b is chosen , then the difference between doing it intentionally and doing it by an accident that we chose to risk is academic.And maybe option B is the right choice , the best of all worlds .
STILL does n't make the other side feel any better .</tokentext>
<sentencetext>First off, terrible example:  if my mom ran a red light, it's partially her fault.
It's not really your fault for going into a building that was going to be bombed (unless you're a human shield or something).
Let's say the man who hit my mom was a drunk driver.
And this wasn't his first fatal accident.I don't think the death penalty is ever appropriate.
I'm also not convinced there's a huge difference between *a certain class of* accidental homicide and intentional homicide.With intentional homicide, a big part of the reason we punish very harshly is we're afraid that the sort of person who'll do this is liable to decide to do it again.
With accidental homicide?
Was it avoidable?
If a person is criminally negligent, we might again be just as afraid they'll do it again.
Like a recidivist drunk driver, who ultimately shows no more regard for human life than the intentional murderer, is just as dangerous and just as despicable.A pattern of bombing buildings full of civilians, even by accident, is just as horrifying as somebody doing as many on purpose.
If you want lenience, then the accidents have to STOP HAPPENING.
No accidents in a few decades, say.But what happens is a cost-benefit analysis.
"We can win with 0\% accidents, but it would take a kabillion dollars and 500 million soldiers and risks Y and Z.  Or we could go with what we know, and win, and have an accident X\% of the time, which is regrettable but acceptable.
Or we could lose.
"  Obviously a bit more complicated than that, but there it is.
And as long as option b is chosen, then the difference between doing it intentionally and doing it by an accident that we chose to risk is academic.And maybe option B is the right choice, the best of all worlds.
STILL doesn't make the other side feel any better.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775664</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777610</id>
	<title>Re:"Friendly AI"</title>
	<author>dyfet</author>
	<datestamp>1263562440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Actually, the whole purpose of modern military training is precisely to psychologically condition people to kill.  Back in the "old" days (WW2), in a typical battle only 15% might actively participate.  By the time of the My Lai massacre, the use of psychological conditioning had already brought kill rates up to 85%.</p><p>Besides the ease with which it enables civilian massacres and war crimes, from My Lai to Fallujah, the other problem with psychological conditioning is that soldiers are discharged cheaply, but there is no immediate "off" switch to such training.  But governments care not, for it is not part of the military budget to also return people sanely to civilian life.  Some end up in violent incidents, and hence the result is damage in the civilian population, but many choose to kill themselves.  Just as one example, the suicide rate for U.S. soldiers who have served in Iraq is actually higher than the battlefield casualty rate!  That is, more soldiers kill themselves than die in battle today.</p><p>The problem is often not that soldiers will not kill civilians, such as to protect an unpopular leader.  The problem rather is that most nations cannot afford the psychological conditioning and training needed to maintain a force that will, certainly not on a large scale.  This was the dilemma faced, for example, by the Chinese government at Tiananmen Square, which back then did not have the resources to condition a military that completely, though eventually it found units from the countryside, with no connection to the region, that would kill.</p><p>War robotics can, however, do more than simply remove people (who may still control the robots) from combat.  It can be used to remove people from knowledge of who is being killed and why, which is particularly useful when using such troops in local suppression.
Imagine if they are told they are fighting a terrorist group in the midst of a city in Afghanistan, with all the audio falsely altered so the language people are speaking no longer sounds like English, and the video feeds scrubbed of other identifying features, when in reality they are controlling robots suppressing a domestic protest in Detroit?  Of course, if people are no longer needed to control them, then even this issue is eliminated.  In this, I agree an AI that follows orders without conscience would be the very best friend of a modern police state.</p></htmltext>
<tokenext>Actually the whole purpose of modern military training is precisely to psychologically condition people to kill .
Back in the " old " days ( ww2 ) , in a typical battle only 15 \ % might actively participate .
By the time of the Mai Li massacre the use of psychological conditioning had already brought kill rates up to 85 \ % .Besides the ease of which it enables civilian massacres and war crimes , from Mail Li to Falluja , the other problem with psychological conditioning is that soldiers are discharged cheaply , but there is no immediate " off " switch to such training .
But governments care not , for it is not part of the military budget to also return people sanely to civilian life .
Some end up in violent incidents and hence the result is damage in the civilian population , but many choose to kill themselves .
Just as one example , the suicide death rate for U.S. soldiers who had served in Iraq is actually higher than the battlefield casualty rate !
That is more soldiers kill themselves than die in battle today.The problem is often not that soldiers will not kill civilians , such as to protect an unpopular leader .
The problem rather is that most nations can not afford the psychological conditioning and training needed to maintain a force that will , certainly on a large scale .
This was the dilemma faced for example by the Chinese government at Tiananmen Square , who back then did not have the resources to condition a military that completely , though eventually they found units from the countryside who had no connection to the region that would kill.War robotics can however do more than simply remove people ( who may still control them ) from combat .
It can be used to remove people from knowledge of who is being killed and why , particularly useful when using such troops in local suppression .
Imagine if they are told they are fighting a terrorist group in the midst of a city in Afghanistan , with all the audio falsely altered so the language people are speaking no longer sounds English , and the video feeds scrubbed of other identifying features , when in reality they are controlling robots suppressing a domestic protest in Detroit ?
Of course , if people are no longer needed to control them , then even this issue is eliminated .
In this , I agree an AI that follows orders without conscience would be the very best friend of a modern police state .</tokentext>
<sentencetext>Actually the whole purpose of modern military training is precisely to psychologically condition people to kill.
Back in the "old" days (ww2), in a typical battle only 15\% might actively participate.
By the time of the Mai Li massacre the use of psychological conditioning had already brought kill rates up to 85\%.Besides the ease of which it enables civilian massacres and war crimes, from Mail Li to Falluja, the other problem with psychological conditioning is that soldiers are discharged cheaply, but there is no immediate "off" switch to such training.
But governments care not, for it is not part of the military budget to also return people sanely to civilian life.
Some end up in violent incidents and hence the result is damage in the civilian population, but many choose to kill themselves.
Just as one example, the suicide death rate for U.S. soldiers who had served in Iraq is actually higher than the battlefield casualty rate!
That is more soldiers kill themselves than die in battle today.The problem is often not that soldiers will not kill civilians, such as to protect an unpopular leader.
The problem rather is that most nations cannot afford the psychological conditioning and training needed to maintain a force that will, certainly on a large scale.
This was the dilemma faced for example by the Chinese government at Tiananmen Square, who back then did not have the resources to condition a military that completely, though eventually they found units from the countryside who had no connection to the region that would kill.War robotics can however do more than simply remove people (who may still control them) from combat.
It can be used to remove people from knowledge of who is being killed and why, particularly useful when using such troops in local suppression.
Imagine if they are told they are fighting a terrorist group in the midst of a city in Afghanistan, with all the audio falsely altered so the language people are speaking no longer sounds English, and the video feeds scrubbed of other identifying features, when in reality they are controlling robots suppressing a domestic protest in Detroit?
Of course, if people are no longer needed to control them, then even this issue is eliminated.
In this, I agree an AI that follows orders without conscience would be the very best friend of a modern police state.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775320</id>
	<title>Re:"Friendly AI"</title>
	<author>evil_aar0n</author>
	<datestamp>1263490620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>How 'bout "biological intelligence" instead?</p><p>And if the US military is involved, is there any hope?</p></htmltext>
<tokenext>How 'bout " biological intelligence " instead ? And if the US military is involved , is there any hope ?</tokentext>
<sentencetext>How 'bout "biological intelligence" instead?And if the US military is involved, is there any hope?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775998</id>
	<title>What do they call this type of robot?</title>
	<author>CrazyJim1</author>
	<datestamp>1263585780000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>This robot is: A humanoid robot controlled entirely by the movements and actions of a live person.

I know we don't have the technology for a robot to keep its balance well enough on two legs, but we are there or at least close for controlling a skeleton in 3d.

What would a robot like this be called?  I'm sure I'm not the first to think about it, so I figure there has to be a name for it.</htmltext>
<tokenext>This robot is : A humanoid robot controlled entirely by the movements and actions of a live person .
I know we do n't have the technology for a robot to keep its balance well enough on two legs , but we are there or at least close for controlling a skeleton in 3d .
What would a robot like this be called ?
I 'm sure I 'm not the first to think about it , so I figure there has to be a name for it .</tokentext>
<sentencetext>This robot is: A humanoid robot controlled entirely by the movements and actions of a live person.
I know we don't have the technology for a robot to keep its balance well enough on two legs, but we are there or at least close for controlling a skeleton in 3d.
What would a robot like this be called?
I'm sure I'm not the first to think about it, so I figure there has to be a name for it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30780146</id>
	<title>Psiops fuzzbombs</title>
	<author>grikdog</author>
	<datestamp>1263577560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Just <i>blathering</i> about this stuff is enough to send committed, anointed-by-Allah, jihadi martyrs to bed, pulling their prayer rugs over their heads?  IDTS.

On the other hand, I'll worry about invincible killerbots when we use them in airports instead of junior G-Men TSA knuckleheads.
<br> <br>
My personal vision of kickass AI minelets is a swarm of little dodecahedrons that roll around where they've been dropped, unfold a small set of sun-following venetian blinds to gather power, and use dragonfly-style neural-net vision to detect motion and identify foe as not-friend.  (Friends have the AES-encrypted countersign of the day.)  Lay those down in a circle thirty yards deep and a hundred yards wide, and you have a nasty defensive perimeter serving the same function as a Roman palisade.  This stuff doesn't have to be high tech.</htmltext>
<tokenext>Just blathering about this stuff is enough to send committed , anointed-by-Allah , jihadi martyrs to bed , pulling their prayer rugs over their heads ?
IDTS . On the other hand , I 'll worry about invincible killerbots when we use them in airports instead of junior G-Men TSA knuckleheads .
My personal vision of kickass AI minelets , is a swarm of little dodecahedrons that roll around where they 've been dropped , that unfold a small set of sun-following venetian blinds that gather power and use dragonfly-style neural net vision to detect motion and identify foe as not-friend .
( Friends have the AES-encrypted countersign of the day .
) Lay those down in a circle thirty yards deep and a hundred yards wide , and you have a nasty defensive perimeter serving the same function as a Roman palisade .
This stuff does n't have to be high tech .</tokentext>
<sentencetext>Just blathering about this stuff is enough to send committed, anointed-by-Allah, jihadi martyrs to bed, pulling their prayer rugs over their heads?
IDTS.

On the other hand, I'll worry about invincible killerbots when we use them in airports instead of junior G-Men TSA knuckleheads.
My personal vision of kickass AI minelets, is a swarm of little dodecahedrons that roll around where they've been dropped, that unfold a small set of sun-following venetian blinds that gather power and use dragonfly-style neural net vision to detect motion and identify foe as not-friend.
(Friends have the AES-encrypted countersign of the day.
)  Lay those down in a circle thirty yards deep and a hundred yards wide, and you have a nasty defensive perimeter serving the same function as a Roman palisade.
This stuff doesn't have to be high tech.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777886</id>
	<title>Re:What do they call this type of robot?</title>
	<author>Tanuki64</author>
	<datestamp>1263565140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>A surrogate.<nobr> <wbr></nobr>;-)
<a href="http://www.imdb.com/title/tt0986263/" title="imdb.com">http://www.imdb.com/title/tt0986263/</a> [imdb.com]</htmltext>
<tokenext>A surrogate .
; - ) http : //www.imdb.com/title/tt0986263/ [ imdb.com ]</tokentext>
<sentencetext>A surrogate.
;-)
http://www.imdb.com/title/tt0986263/ [imdb.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775998</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775628</id>
	<title>Liberation of Tibet</title>
	<author>Anonymous</author>
	<datestamp>1263494220000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Mechanized soldiers can be useful.
<p>
Consider the following scenario.
</p><p>
<b>
In the early morning of December 7, 2041, one million mechanized soldiers arise from the receding tide and onto the shores of China.  The robots march relentlessly westward, killing all Chinese soldiers in their path.  The final destination is Tibet.
</b></p><p><b>
In the words of that old Negro spiritual, "Tibet, free at last!  Buddha, Almighty!  Free at last!"
</b></p></htmltext>
<tokenext>Mechanized soldiers can be useful .
Consider the following scenario .
In the early morning of December 7 , 2041 , one million mechanized soldiers arise from the receding tide and onto the shores of China .
The robots march relentlessly westward , killing all Chinese soldiers in their path .
The final destination is Tibet .
In the words of that old Negro spiritual , " Tibet , free at last !
Buddha , Almighty !
Free at last !
"</tokentext>
<sentencetext>Mechanized soldiers can be useful.
Consider the following scenario.
In the early morning of December 7, 2041, one million mechanized soldiers arise from the receding tide and onto the shores of China.
The robots march relentlessly westward, killing all Chinese soldiers in their path.
The final destination is Tibet.
In the words of that old Negro spiritual, "Tibet, free at last!
Buddha, Almighty!
Free at last!
"
</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774966</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776776</id>
	<title>Re:"Friendly AI"</title>
	<author>Anonymous</author>
	<datestamp>1263552900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>Of course, for thousands of years of recorded history, people <b>did</b> kill each other en masse at arm's length.  Alexander's soldiers may have been more honest about what they were doing than somebody today sitting in a bunker pressing a button and killing people on the other side of the globe, but they were no less bloodthirsty.</p></div><p>Actually, Alexander's (well, Phillip's) phalanges fought at 10-foot polearm's length. It was a great innovation back then in olden days... and it also shows that ancient warriors chose to be "less honest about what they were doing" if they had that choice. When you think about it, through entire history of war, each improvement in distancing your soldiers from their own kill zone gave your side strategic advantage. However, the problem we see today is that distance has become so great that you are not sure who and if anyone do you kill. Combined with relaxation of conscience and of sense of responsibility, it is an invitation to large scale disasters. Of course, humanitarian laws exist, but those who have to rely on them (and/or mercy of their enemies) are in deep, deep trouble. As necessity will have it, soon we will find out about theoretical and practical limitations of robotic weapons.</p></div>
	</htmltext>
<tokenext>Of course , for thousands of years of recorded history , people did kill each other en masse at arm 's length .
Alexander 's soldiers may have been more honest about what they were doing than somebody today sitting in a bunker pressing a button and killing people on the other side of the globe , but they were no less bloodthirsty.Actually , Alexander 's ( well , Phillip 's ) phalanges fought at 10-foot polearm 's length .
It was a great innovation back then in olden days... and it also shows that ancient warriors chose to be " less honest about what they were doing " if they had that choice .
When you think about it , through entire history of war , each improvement in distancing your soldiers from their own kill zone gave your side strategic advantage .
However , the problem we see today is that distance has become so great that you are not sure who and if anyone do you kill .
Combined with relaxation of conscience and of sense of responsibility , it is an invitation to large scale disasters .
Of course , humanitarian laws exist , but those who have to rely on them ( and/or mercy of their enemies ) are in deep , deep trouble .
As necessity will have it , soon we will find out about theoretical and practical limitations of robotic weapons .</tokentext>
<sentencetext>Of course, for thousands of years of recorded history, people did kill each other en masse at arm's length.
Alexander's soldiers may have been more honest about what they were doing than somebody today sitting in a bunker pressing a button and killing people on the other side of the globe, but they were no less bloodthirsty.Actually, Alexander's (well, Phillip's) phalanges fought at 10-foot polearm's length.
It was a great innovation back then in olden days... and it also shows that ancient warriors chose to be "less honest about what they were doing" if they had that choice.
When you think about it, through entire history of war, each improvement in distancing your soldiers from their own kill zone gave your side strategic advantage.
However, the problem we see today is that distance has become so great that you are not sure who and if anyone do you kill.
Combined with relaxation of conscience and of sense of responsibility, it is an invitation to large scale disasters.
Of course, humanitarian laws exist, but those who have to rely on them (and/or mercy of their enemies) are in deep, deep trouble.
As necessity will have it, soon we will find out about theoretical and practical limitations of robotic weapons.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775632</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775290</id>
	<title>Re:"Friendly AI"</title>
	<author>hyperion2010</author>
	<datestamp>1263490380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Haha, right now we're having a hell of a time getting other human beings to be friendy and they are quite a bit smarter and more dangerous than any robot, so until the meat is less dangerous than the quartz I think it is a grand waist of resources to try and make robots "friendly."  To tell you the truth there are certain things that real intelligence should be unfriendly towards.</p></htmltext>
<tokenext>Haha , right now we 're having a hell of a time getting other human beings to be friendy and they are quite a bit smarter and more dangerous than any robot , so until the meat is less dangerous than the quartz I think it is a grand waist of resources to try and make robots " friendly .
" To tell you the truth there are certain things that real intelligence should be unfriendly towards .</tokentext>
<sentencetext>Haha, right now we're having a hell of a time getting other human beings to be friendly and they are quite a bit smarter and more dangerous than any robot, so until the meat is less dangerous than the quartz I think it is a grand waste of resources to try and make robots "friendly."
To tell you the truth there are certain things that real intelligence should be unfriendly towards.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775586</id>
	<title>Of course he's scared</title>
	<author>Anonymous</author>
	<datestamp>1263493800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Who do you think will be first against the wall when our new robotic overlords take control?</p><p>Did I say overlords?  I meant protectors.</p></htmltext>
<tokenext>Who do you think will be first against the wall when our new robotic overlords take control ?
Did I say overlords ?
I meant protectors .</tokentext>
<sentencetext>Who do you think will be first against the wall when our new robotic overlords take control?
Did I say overlords?
I meant protectors.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779520</id>
	<title>Re:"Friendly AI"</title>
	<author>Anonymous</author>
	<datestamp>1263574320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>'Anonymous Coward' - hehe! I just don't want to register on yet another damned site...</p><p>Anyway, very well written. I hope it works in practice.</p></htmltext>
<tokenext>'Anonymous Coward ' - hehe !
I just do n't want to register on yet another damned site ...
Anyway , very well written .
I hope it works in practice .</tokentext>
<sentencetext>'Anonymous Coward' - hehe!
I just don't want to register on yet another damned site...
Anyway, very well written.
I hope it works in practice.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775722</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776924</id>
	<title>Fully autonomous killing machines</title>
	<author>dugeen</author>
	<datestamp>1263555300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Rather than spending money to build mechanical FAKMs, it would be more cost-effective to continue training human ones.</htmltext>
<tokenext>Rather than spending money to build mechanical FAKMs , it would be more cost-effective to continue training human ones .</tokentext>
<sentencetext>Rather than spending money to build mechanical FAKMs, it would be more cost-effective to continue training human ones.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30778232</id>
	<title>Hmm..</title>
	<author>LarrySDonald</author>
	<datestamp>1263567300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Well, supposing both sides use them we'll be approaching essentially playing battlebots for it. Of course it'd be even better if we could like flip for it or play chess for it, but it's kind of a step (or stumble) forward.</htmltext>
<tokenext>Well , supposing both sides use them we 'll be approaching essentially playing battlebots for it .
Of course it 'd be even better if we could like flip for it or play chess for it , but it 's kind of a step ( or stumble ) forward .</tokentext>
<sentencetext>Well, supposing both sides use them we'll be approaching essentially playing battlebots for it.
Of course it'd be even better if we could like flip for it or play chess for it, but it's kind of a step (or stumble) forward.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779390</id>
	<title>Re:"Friendly AI"</title>
	<author>NightlordTW</author>
	<datestamp>1263573720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>wars usually involve breaking the rules rather than following them</htmltext>
<tokenext>wars usually involve breaking the rules rather than following them</tokentext>
<sentencetext>wars usually involve breaking the rules rather than following them</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775722</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775744</id>
	<title>Re:Look on the bright side</title>
	<author>Sulphur</author>
	<datestamp>1263495780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"killbots" - will almost certainly have a preset kill limit</p><p>Failing that, game warden bots.</p></htmltext>
<tokenext>" killbots " - will almost certainly have a preset kill limit
Failing that , game warden bots .</tokentext>
<sentencetext>"killbots" - will almost certainly have a preset kill limit
Failing that, game warden bots.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775084</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775094</id>
	<title>Q. What do you call 50,000 dead Haitians?</title>
	<author>Anonymous</author>
	<datestamp>1263488100000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>A. A good start!</p></htmltext>
<tokenext>A. A good start !</tokentext>
<sentencetext>A. A good start!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776260</id>
	<title>3rd Armored Corps commander wants killbots</title>
	<author>Animats</author>
	<datestamp>1263546420000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>
The US military wants robots.  More robots. Robots that kill.  Now.
</p><p>
Read <a href="http://www.nationaldefensemagazine.org/archive/2009/October/Pages/FailureToFieldRightKindsofRobotsCostsLives,ArmyCommanderSays.aspx" title="nationalde...gazine.org">Failure To Field The Right Robots Costs Lives, General Says</a> [nationalde...gazine.org].  Lt. General Rick Lynch, commander of the U.S. Army's 3rd Armored Corps, wants autonomous killbots.  His corps lost 155 soldiers in Iraq, and he claims that 80% of them would have been saved if the right kind of robots had been deployed.  On watching "hotspots" for enemy activity: "Robots can take the soldiers' places.  They can continuously keep watch on an area, and if nefarious activity is spotted, we can take appropriate action. ... We can kill those bastards before they plant the IEDs."
</p><p>
This is a combat general in charge of a major Army command making it happen.</p></htmltext>
<tokenext>The US military wants robots .
More robots .
Robots that kill .
Now .
Read Failure To Field The Right Robots Costs Lives , General Says [ nationalde...gazine.org ] .
Lt. General Rick Lynch , commander of the U.S. Army 's 3rd Armored Corps , wants autonomous killbots .
His corps lost 155 soldiers in Iraq , and he claims that 80 % of them would have been saved if the right kind of robots had been deployed .
On watching " hotspots " for enemy activity : " Robots can take the soldiers ' places .
They can continuously keep watch on an area , and if nefarious activity is spotted , we can take appropriate action .
... We can kill those bastards before they plant the IEDs . "
This is a combat general in charge of a major Army command making it happen .</tokentext>
<sentencetext>
The US military wants robots.
More robots.
Robots that kill.
Now.

Read Failure To Field The Right Robots Costs Lives, General Says [nationalde...gazine.org].
Lt. General Rick Lynch, commander of the U.S. Army's 3rd Armored Corps, wants autonomous killbots.
His corps lost 155 soldiers in Iraq, and he claims that 80% of them would have been saved if the right kind of robots had been deployed.
On watching "hotspots" for enemy activity: "Robots can take the soldiers' places.
They can continuously keep watch on an area, and if nefarious activity is spotted, we can take appropriate action.
... We can kill those bastards before they plant the IEDs."

This is a combat general in charge of a major Army command making it happen.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775374</id>
	<title>Re:skynet</title>
	<author>Midnight Thunder</author>
	<datestamp>1263491220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I am too busy doing R&amp;D of my time machine.</p></htmltext>
<tokenext>I am too busy doing R&amp;D of my time machine .</tokentext>
<sentencetext>I am too busy doing R&amp;D of my time machine.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774966</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777596</id>
	<title>Re:Running spider mines</title>
	<author>Anonymous</author>
	<datestamp>1263562380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>This is an important point. Up till now most people just think about asymmetric warfare: We kill those primitive guys with the AK-47s. But what will happen if nations go to war that both use this technology? Will this war really be fought out in space or in the deep sea? I do not think so, maybe at first. But at some point one side will be losing and become desperate. The ultimate reaction of the loser is: Either the winner will retreat or the loser will create robots that directly kill the winner's populace. A lot of small ones that are hard to stop. Remember in this respect that people can be killed in many ways, not only with explosives. Robots can take on many shapes. Most notably they can be very small. You cannot shrink a human being. It needs food and air and it produces heat. This makes humans easy to detect and in fact easy targets.</p><p>I am not sure if it is still ethical to carry out this kind of research.</p></htmltext>
<tokenext>This is an important point .
Up till now most people just think about asymmetric warfare : We kill those primitive guys with the AK-47s .
But what will happen if nations go to war that both use this technology ?
Will this war really be fought out in space or in the deep sea ?
I do not think so , maybe at first .
But at some point one side will be losing and become desperate .
The ultimate reaction of the loser is : Either the winner will retreat or the loser will create robots that directly kill the winner 's populace .
A lot of small ones that are hard to stop .
Remember in this respect that people can be killed in many ways , not only with explosives .
Robots can take on many shapes .
Most notably they can be very small .
You can not shrink a human being .
It needs food and air and it produces heat .
This makes humans easy to detect and in fact easy targets .
I am not sure if it is still ethical to carry out this kind of research .</tokentext>
<sentencetext>This is an important point.
Up till now most people just think about asymmetric warfare: We kill those primitive guys with the AK-47s.
But what will happen if nations go to war that both use this technology?
Will this war really be fought out in space or in the deep sea?
I do not think so, maybe at first.
But at some point one side will be losing and become desperate.
The ultimate reaction of the loser is: Either the winner will retreat or the loser will create robots that directly kill the winner's populace.
A lot of small ones that are hard to stop.
Remember in this respect that people can be killed in many ways, not only with explosives.
Robots can take on many shapes.
Most notably they can be very small.
You can not shrink a human being.
It needs food and air and it produces heat.
This makes humans easy to detect and in fact easy targets.
I am not sure if it is still ethical to carry out this kind of research.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775776</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776980</id>
	<title>They don't need two legged "robots" to do that</title>
	<author>argent</author>
	<datestamp>1263556080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>They don't need humanoid robots to make fully autonomous killing machines. They already have RPVs with weapons mounted (e.g., Predator), and they have autonomous weapons systems (e.g., mines).</p></htmltext>
<tokenext>They do n't need humanoid robots to make fully autonomous killing machines .
They already have RPVs with weapons mounted ( e.g. , Predator ) , and they have autonomous weapons systems ( e.g. , mines ) .</tokentext>
<sentencetext>They don't need humanoid robots to make fully autonomous killing machines.
They already have RPVs with weapons mounted (e.g., Predator), and they have autonomous weapons systems (e.g., mines).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30849836</id>
	<title>Re:"Friendly AI"</title>
	<author>Anonymous</author>
	<datestamp>1264106340000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Ultimately, it comes down to this: if there's an ultimate weapon, which is ultimately accurate, do you trust that it will effectively bring peace and the greater good?</p><p>I for one, given the possibility of just how it could go wrong, am not willing to.</p><p>Another side-question: who will wield this weapon? If it's only one nation, the path to world military domination is technically open - generally not a Good Thing. If several nations have access to it, would it bring a balance of any kind (blowing up the planet is not considered a balance in this context!)</p></htmltext>
<tokenext>Ultimately , it comes down to this : if there 's an ultimate weapon , which is ultimately accurate , do you trust that it will effectively bring peace and the greater good ?
I for one , given the possibility of just how it could go wrong , am not willing to .
Another side-question : who will wield this weapon ?
If it 's only one nation , the path to world military domination is technically open - generally not a Good Thing .
If several nations have access to it , would it bring a balance of any kind ( blowing up the planet is not considered a balance in this context ! )</tokentext>
<sentencetext>Ultimately, it comes down to this: if there's an ultimate weapon, which is ultimately accurate, do you trust that it will effectively bring peace and the greater good?
I for one, given the possibility of just how it could go wrong, am not willing to.
Another side-question: who will wield this weapon?
If it's only one nation, the path to world military domination is technically open - generally not a Good Thing.
If several nations have access to it, would it bring a balance of any kind (blowing up the planet is not considered a balance in this context!)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775722</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777352</id>
	<title>not surprised..</title>
	<author>brunokummel</author>
	<datestamp>1263559980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It is kind of sad, but this is what we, humans, are good at... making every effort to create or build things to either get us recognition or to kill those who don't recognize us...</htmltext>
<tokenext>It is kind of sad , but this is what we , humans , are good at ... making every effort to create or build things to either get us recognition or to kill those who do n't recognize us ...</tokentext>
<sentencetext>It is kind of sad, but this is what we, humans, are good at... making every effort to create or build things to either get us recognition or to kill those who don't recognize us...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779806</id>
	<title>March of progress...</title>
	<author>Anonymous</author>
	<datestamp>1263575940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>As soon as risk is removed from the equation through the use of cheap robots, the calculus of collateral damage fundamentally changes. This is because the scale of the offense is typically balanced by risks to the aggressor: in industrial capacity, human toll, public embarrassment, etc. Robots reduce the exposure to most of these elements. The use of robots means fewer witnesses, less transparency, and a collapse in any other measure of accountability for collateral damage.</p><p>No action occurs without a reaction, however. The shift amounts to an asymmetric change, and the recipients have always responded to these shifts in kind. If archaic limits to the scale of aggression are shed by one side, they will no longer be the worry of the other side as well. What we call terrorism today may simply be an emerging tactic of regular war in the future. Enormously vulnerable infrastructures of a thriving civilization become the most devastating target with least risk, with nearly arbitrary collateral damage ratios. The shift completely removes the archaic idea of a "theatre of war" where the more "developed" aggressor controls the battle and makes information on the costs (material and human) impossible to discern. Do you enjoy your public water, food production, energy distribution, and health care systems? All of these are devastatingly vulnerable to even a moderately determined adversary. The lines won't be drawn by a technically superior aggressor, but by wherever the least involved, least militarized, most effective targets are. One thing has remained true throughout history when tectonic shifts in military practice occur: the idea that aggressor soldiers are the exclusive or most effective targets in defense or offense will quickly become obsolete.</p></htmltext>
<tokenext>As soon as risk is removed from the equation through the use of cheap robots , the calculus of collateral damage fundamentally changes .
This is because the scale of the offense is typically balanced by risks to the aggressor : in industrial capacity , human toll , public embarrassment , etc .
Robots reduce the exposure to most of these elements .
The use of robots means fewer witnesses , less transparency , and a collapse in any other measure of accountability for collateral damage .
No action occurs without a reaction , however .
The shift amounts to an asymmetric change , and the recipients have always responded to these shifts in kind .
If archaic limits to the scale of aggression are shed by one side , they will no longer be the worry of the other side as well .
What we call terrorism today may simply be an emerging tactic of regular war in the future .
Enormously vulnerable infrastructures of a thriving civilization become the most devastating target with least risk , with nearly arbitrary collateral damage ratios .
The shift completely removes the archaic idea of a " theatre of war " where the more " developed " aggressor controls the battle and makes information on the costs ( material and human ) impossible to discern .
Do you enjoy your public water , food production , energy distribution , and health care systems ?
All of these are devastatingly vulnerable to even a moderately determined adversary .
The lines wo n't be drawn by a technically superior aggressor , but by wherever the least involved , least militarized , most effective targets are .
One thing has remained true throughout history when tectonic shifts in military practice occur : the idea that aggressor soldiers are the exclusive or most effective targets in defense or offense will quickly become obsolete .</tokentext>
<sentencetext>As soon as risk is removed from the equation through the use of cheap robots, the calculus of collateral damage fundamentally changes.
This is because the scale of the offense is typically balanced by risks to the aggressor: in industrial capacity, human toll, public embarrassment, etc.
Robots reduce the exposure to most of these elements.
The use of robots means fewer witnesses, less transparency, and a collapse in any other measure of accountability for collateral damage.
No action occurs without a reaction, however.
The shift amounts to an asymmetric change, and the recipients have always responded to these shifts in kind.
If archaic limits to the scale of aggression are shed by one side, they will no longer be the worry of the other side as well.
What we call terrorism today may simply be an emerging tactic of regular war in the future.
Enormously vulnerable infrastructures of a thriving civilization become the most devastating target with least risk, with nearly arbitrary collateral damage ratios.
The shift completely removes the archaic idea of a "theatre of war" where the more "developed" aggressor controls the battle and makes information on the costs (material and human) impossible to discern.
Do you enjoy your public water, food production, energy distribution, and health care systems?
All of these are devastatingly vulnerable to even a moderately determined adversary.
The lines won't be drawn by a technically superior aggressor, but by wherever the least involved, least militarized, most effective targets are.
One thing has remained true throughout history when tectonic shifts in military practice occur: the idea that aggressor soldiers are the exclusive or most effective targets in defense or offense will quickly become obsolete.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775594</id>
	<title>This isn't a hopeful future</title>
	<author>Anonymous</author>
	<datestamp>1263493860000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext>There's lots of talk here about how machines are not as "good" as humans. That is certainly true on an overall basis - but for specific well defined tasks, a machine can outperform a human by an order of magnitude or more.<p>
Recognize a human being by IR? No problem. Aim a weapon at the head? No problem. Bang, one shot and one kill. Repeat times N where N is the size of the machine's ammo supply or the number of targets (whichever is less). The whole cycle would take a fraction of a second and if you were one of the targets you'd probably be dead before you discovered your peril. The fact that such machines are well within our capability to mass produce right now isn't what scares me - it's the sad fact that there are people in high places that think that doing this would be a good idea.</p><p>
There are unwritten rules to wars - the general concept is duke it out until one side or the other gives up or can't continue. This "agreement" would break down when the killbots started mowing down the enemy and things would get very ugly in a hurry. Do you think nukes are the "big scary?" Wait until you see what's coming if we head down this path.</p></htmltext>
<tokenext>There 's lots of talk here about how machines are not as " good " as humans .
That is certainly true on an overall basis - but for specific well defined tasks , a machine can outperform a human by an order of magnitude or more .
Recognize a human being by IR ?
No problem .
Aim a weapon at the head ?
No problem .
Bang , one shot and one kill .
Repeat times N where N is the size of the machine 's ammo supply or the number of targets ( whichever is less ) .
The whole cycle would take a fraction of a second and if you were one of the targets you 'd probably be dead before you discovered your peril .
The fact that such machines are well within our capability to mass produce right now is n't what scares me - it 's the sad fact that there are people in high places that think that doing this would be a good idea .
There are unwritten rules to wars - the general concept is duke it out until one side or the other gives up or ca n't continue .
This " agreement " would break down when the killbots started mowing down the enemy and things would get very ugly in a hurry .
Do you think nukes are the " big scary ? "
Wait until you see what 's coming if we head down this path .</tokentext>
<sentencetext>There's lots of talk here about how machines are not as "good" as humans.
That is certainly true on an overall basis - but for specific well defined tasks, a machine can outperform a human by an order of magnitude or more.
Recognize a human being by IR?
No problem.
Aim a weapon at the head?
No problem.
Bang, one shot and one kill.
Repeat times N where N is the size of the machine's ammo supply or the number of targets (whichever is less).
The whole cycle would take a fraction of a second and if you were one of the targets you'd probably be dead before you discovered your peril.
The fact that such machines are well within our capability to mass produce right now isn't what scares me - it's the sad fact that there are people in high places that think that doing this would be a good idea.
There are unwritten rules to wars - the general concept is duke it out until one side or the other gives up or can't continue.
This "agreement" would break down when the killbots started mowing down the enemy and things would get very ugly in a hurry.
Do you think nukes are the "big scary?"
Wait until you see what's coming if we head down this path.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776358</id>
	<title>Military robots are good. why?</title>
	<author>v4vijayakumar</author>
	<datestamp>1263547740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>
Countries can agree to fight only with robots, in an area that is far away from where people live. They can define a set of rules for this robotic fight, and then decide the winner. Zero-casualty war!

Another way is to play a video game. Winner of this video game wins the war. Zero robotic-casualty war!

Why make something so simple when there is a real "WAR"...!</htmltext>
<tokenext>Countries can agree to fight only with robots , in an area that is far away from where people live .
They can define a set of rules for this robotic fight , and then decide the winner .
zero-casualty war !
Another way is to play a video game .
Winner of this video game wins the war .
zero robotic-casualty war !
Why make something so simple when there is a real " WAR " ... !</tokentext>
<sentencetext>
Countries can agree to fight only with robots, in an area that is far away from where people live.
They can define a set of rules for this robotic fight, and then decide the winner.
Zero-casualty war!
Another way is to play a video game.
Winner of this video game wins the war.
Zero robotic-casualty war!
Why make something so simple when there is a real "WAR"...!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777902</id>
	<title>Re:"Friendly AI"</title>
	<author>jollyreaper</author>
	<datestamp>1263565200000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><div class="quote"><p>Of course, for thousands of years of recorded history, people did kill each other en masse at arm's length. Alexander's soldiers may have been more honest about what they were doing than somebody today sitting in a bunker pressing a button and killing people on the other side of the globe, but they were no less bloodthirsty. So I don't think you can blame the modern willingness to kill on the impartiality created by modern military technology, because the modern willingness to kill looks remarkably like the ancient willingness to kill, just with different tools.</p></div><p>Part of it is cultural conditioning. People who grow up in times of war like that are more willing to do the whole rape and pillage thing. But just look at the problem modern armies have had conditioning soldiers to shoot to kill. The statistics come from WWI, II, Korea, and Vietnam. Something like one in ten soldiers were shooting for effect when their lives weren't immediately in danger. Not sure exactly how this was determined but the whole kill drill done in boot camp is about breaking that resistance until shooting becomes automatic. The studies said it became 100% by Vietnam.</p><p>There's a desensitization that comes with all of this, of course. Take a normal, sane, caring 18-year-old and put him in a fucked situation like Iraq. The first month in, he's not wanting to hurt civilians. After he loses his best friend to a car bomb driven by what looked like "civilians" he's willing to kill all the motherfucking motherfuckers and doesn't care about arguments of guilt or innocence. They're local, they're all guilty. Of course, there's also the guys who shoot up a car they think is running the blockade only to find out it was just a confused father with his family and here's the kids dripping life into the street. That's gonna stick with those guys for the rest of their lives. Might even cause them to eat a bullet.</p>
	</htmltext>
<tokenext>Of course , for thousands of years of recorded history , people did kill each other en masse at arm 's length .
Alexander 's soldiers may have been more honest about what they were doing than somebody today sitting in a bunker pressing a button and killing people on the other side of the globe , but they were no less bloodthirsty .
So I do n't think you can blame the modern willingness to kill on the impartiality created by modern military technology , because the modern willingness to kill looks remarkably like the ancient willingness to kill , just with different tools .
Part of it is cultural conditioning .
People who grow up in times of war like that are more willing to do the whole rape and pillage thing .
But just look at the problem modern armies have had conditioning soldiers to shoot to kill .
The statistics come from WWI , II , Korea , and Vietnam .
Something like one in ten soldiers were shooting for effect when their lives were n't immediately in danger .
Not sure exactly how this was determined but the whole kill drill done in boot camp is about breaking that resistance until shooting becomes automatic .
The studies said it became 100 % by Vietnam .
There 's a desensitization that comes with all of this , of course .
Take a normal , sane , caring 18-yr old and put him in a fucked situation like Iraq .
The first month in , he 's not wanting to hurt civilians .
After he loses his best friend to a car bomb driven by what looked like " civilians " he 's willing to kill all the motherfucking motherfuckers and does n't care about arguments of guilt or innocence .
They 're local , they 're all guilty .
Of course , there 's also the guys who shoot up a car they think is running the blockade only to find out it was just a confused father with his family and here 's the kids dripping life into the street .
That 's gon na stick with those guys for the rest of their lives .
Might even cause them to eat a bullet .</tokentext>
<sentencetext>Of course, for thousands of years of recorded history, people did kill each other en masse at arm's length.
Alexander's soldiers may have been more honest about what they were doing than somebody today sitting in a bunker pressing a button and killing people on the other side of the globe, but they were no less bloodthirsty.
So I don't think you can blame the modern willingness to kill on the impartiality created by modern military technology, because the modern willingness to kill looks remarkably like the ancient willingness to kill, just with different tools.Part of it is cultural conditioning.
People who grow up in times of war like that are more willing to do the whole rape and pillage thing.
But just look at the problem modern armies have had conditioning soldiers to shoot to kill.
The statistics come from WWI, II, Korea, and Vietnam.
Something like one in ten soldiers were shooting for effect when their lives weren't immediately in danger.
Not sure exactly how this was determined but the whole kill drill done in boot camp is about breaking that resistance until shooting becomes automatic.
The studies said it became 100% by Vietnam.There's a desensitization that comes with all of this, of course.
Take a normal, sane, caring 18-yr old and put him in a fucked situation like Iraq.
The first month in, he's not wanting to hurt civilians.
After he loses his best friend to a car bomb driven by what looked like "civilians" he's willing to kill all the motherfucking motherfuckers and doesn't care about arguments of guilt or innocence.
They're local, they're all guilty.
Of course, there's also the guys who shoot up a car they think is running the blockade only to find out it was just a confused father with his family and here's the kids dripping life into the street.
That's gonna stick with those guys for the rest of their lives.
Might even cause them to eat a bullet.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775632</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775776</id>
	<title>Running spider mines</title>
	<author>DigiShaman</author>
	<datestamp>1263496260000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>Won't be long before we (any nation really) have robotic spider mines. Imagine them communicating with each other in a pack and relaying GPS location data. If one finds a target, they start to zero in on the victim. Imagine being out in the field and seeing one of these bastards running along and then hopping on to your fellow soldier just prior to detonation.</p><p>Don't know about the rest of you, but "Oh fuck" would be the last thing going through my mind after seeing something like that.</p></htmltext>
<tokentext>Wo n't be long before we ( any nation really ) have robotic spider mines .
Imagine them communicating with each other in a pack and relaying GPS location data .
If one finds a target , they start to zero in on the victim .
Imagine being out in the field and seeing one of these bastards running along and then hopping on to your fellow soldier just prior to detonation.Do n't know about the rest of you , but " Oh fuck " would be the last thing going through my mind after seeing something like that .</tokentext>
<sentencetext>Won't be long before we (any nation really) have robotic spider mines.
Imagine them communicating with each other in a pack and relaying GPS location data.
If one finds a target, they start to zero in on the victim.
Imagine being out in the field and seeing one of these bastards running along and then hopping on to your fellow soldier just prior to detonation.Don't know about the rest of you, but "Oh fuck" would be the last thing going through my mind after seeing something like that.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779128</id>
	<title>Re:"Friendly AI"</title>
	<author>tibman</author>
	<datestamp>1263572160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>So the US military in Iraq has to basically assume everyone that isn't a US soldier might be the enemy and therefore they can convince themselves that the ethical thing to do is kill anyone they see that they aren't completely sure is on their side.</i></p><p>I think that is the first month of deployment.. or the "i'm afraid to die" phase.  If you can pass beyond that, the next phase is Acceptance.  The "I'll probably die" phase lets you be less fearful and see the local population as normal people with malcontents mixed in.  The third phase is "i'm already dead, time and place TBD" and that is a wonderful feeling to have.  You only fear letting your guys down or jacking up the mission.  In the third phase you typically only shoot at other muzzle flashes in the night.</p><p><i>they are one-sided invasions</i><br>That is the effect the US Army has/will always strive for.  Iraq was supposed to have over 500k troops during the initial 2003 invasion.  The invasion force was under 500k with 250k being from the US.  Training, Technology, and Allies are what the US Military uses to overwhelm its enemies.</p></htmltext>
<tokentext>So the US military in Iraq has to basically assume everyone that is n't a US soldier might be the enemy and therefore they can convince themselves that the ethical thing to do is kill anyone they see that they are n't completely sure is on their side.I think that is the first month of deployment.. or the " i 'm afraid to die " phase .
If you can pass beyond that , the next phase is Acceptance .
The " I 'll probably die " phase lets you be less fearful and see the local population as normal people with malcontents mixed in .
The third phase is " i 'm already dead , time and place TBD " and that is a wonderful feeling to have .
You only fear letting your guys down or jacking up the mission .
In the third phase you typically only shoot at other muzzle flashes in the night.they are one-sided invasionsThat is the effect the US Army has/will always strive for .
Iraq was supposed to have over 500k troops during the initial 2003 invasion .
The invasion force was under 500k with 250k being from the US .
Training , Technology , and Allies are what the US Military uses to overwhelm its enemies .
<sentencetext>So the US military in Iraq has to basically assume everyone that isn't a US soldier might be the enemy and therefore they can convince themselves that the ethical thing to do is kill anyone they see that they aren't completely sure is on their side.I think that is the first month of deployment.. or the "i'm afraid to die" phase.
If you can pass beyond that, the next phase is Acceptance.
The "I'll probably die" phase lets you be less fearful and see the local population as normal people with malcontents mixed in.
The third phase is "i'm already dead, time and place TBD" and that is a wonderful feeling to have.
You only fear letting your guys down or jacking up the mission.
In the third phase you typically only shoot at other muzzle flashes in the night.they are one-sided invasionsThat is the effect the US Army has/will always strive for.
Iraq was supposed to have over 500k troops during the initial 2003 invasion.
The invasion force was under 500k with 250k being from the US.
Training, Technology, and Allies are what the US Military uses to overwhelm its enemies.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776560</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775548</id>
	<title>no need to worry</title>
	<author>societyofrobots</author>
	<datestamp>1263493320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>No military will use robots that are less effective than human soldiers.</p><p>So if robots are being used, what does that mean? It means fewer civilian casualties, fewer friendly-fire accidents, and more tax money remaining for non-military purposes.</p></htmltext>
<tokentext>No military will use robots that are less effective than human soldiers.So if robots are being used , what does that mean ?
It means fewer civilian casualties , fewer friendly-fire accidents , and more tax money remaining for non-military purposes .</tokentext>
<sentencetext>No military will use robots that are less effective than human soldiers.So if robots are being used, what does that mean?
It means fewer civilian casualties, fewer friendly-fire accidents, and more tax money remaining for non-military purposes.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774966</id>
	<title>skynet</title>
	<author>Anonymous</author>
	<datestamp>1263487020000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><p>okay, where's the tag?</p></htmltext>
<tokentext>okay , where 's the tag ?</tokentext>
<sentencetext>okay, where's the tag?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775560</id>
	<title>BattleBots</title>
	<author>Anonymous</author>
	<datestamp>1263493380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Does this mean that wars will just be extended episodes of BattleBots?</p></htmltext>
<tokentext>Does this mean that wars will just be extended episodes of BattleBots ?</tokentext>
<sentencetext>Does this mean that wars will just be extended episodes of BattleBots?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779654</id>
	<title>Re:"Friendly AI"</title>
	<author>stdarg</author>
	<datestamp>1263575040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Good point. I was talking about the wider point about whether intent matters when someone you care about is killed rather than trying to draw an analogy precisely to the bombing situation.</p><p>I guess an analogy closer to that would be, say your mom is taken hostage by an armed terrorist, right in front of a group of police officers who are all armed. It can turn out in a few likely ways:</p><p>1. The cops kill the terrorist and your mom is unharmed.<br>2. The cops kill the terrorist and your mom is killed by the terrorist at the last second.<br>3. The cops kill the terrorist and your mom is also killed by the cops accidentally.<br>4. The cops let the terrorist go and the terrorist kills your mom, then later the cops arrest the terrorist.<br>5. The cops let the terrorist go and the terrorist releases your mom unharmed, then later the cops arrest the terrorist.</p><p>1 and 5 are the best for your mom, no question. 1 may be better than 5 if you don't like terrorists.</p><p>But say 2, 3, or 4 happen since that's what we're talking about. I think a lot of people would support the death penalty for case 4 and there wouldn't be as much support for 2 or 3.</p></htmltext>
<tokentext>Good point .
I was talking about the wider point about whether intent matters when someone you care about is killed rather than trying to draw an analogy precisely to the bombing situation.I guess an analogy closer to that would be , say your mom is taken hostage by an armed terrorist , right in front of a group of police officers who are all armed .
It can turn out in a few likely ways : 1 .
The cops kill the terrorist and your mom is unharmed.2 .
The cops kill the terrorist and your mom is killed by the terrorist at the last second.3 .
The cops kill the terrorist and your mom is also killed by the cops accidentally.4 .
The cops let the terrorist go and the terrorist kills your mom , then later the cops arrest the terrorist.5 .
The cops let the terrorist go and the terrorist releases your mom unharmed , then later the cops arrest the terrorist.1 and 5 are the best for your mom , no question .
1 may be better than 5 if you do n't like terrorists.But say 2 , 3 , or 4 happen since that 's what we 're talking about .
I think a lot of people would support the death penalty for case 4 and there would n't be as much support for 2 or 3 .</tokentext>
<sentencetext>Good point.
I was talking about the wider point about whether intent matters when someone you care about is killed rather than trying to draw an analogy precisely to the bombing situation.I guess an analogy closer to that would be, say your mom is taken hostage by an armed terrorist, right in front of a group of police officers who are all armed.
It can turn out in a few likely ways:1.
The cops kill the terrorist and your mom is unharmed.2.
The cops kill the terrorist and your mom is killed by the terrorist at the last second.3.
The cops kill the terrorist and your mom is also killed by the cops accidentally.4.
The cops let the terrorist go and the terrorist kills your mom, then later the cops arrest the terrorist.5.
The cops let the terrorist go and the terrorist releases your mom unharmed, then later the cops arrest the terrorist.1 and 5 are the best for your mom, no question.
1 may be better than 5 if you don't like terrorists.But say 2, 3, or 4 happen since that's what we're talking about.
I think a lot of people would support the death penalty for case 4 and there wouldn't be as much support for 2 or 3.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777778</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775042</id>
	<title>Once again, The Simpsons is correct!</title>
	<author>Anonymous</author>
	<datestamp>1263487620000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p><a href="http://en.wikipedia.org/wiki/The_Secret_War_of_Lisa_Simpson" title="wikipedia.org" rel="nofollow">http://en.wikipedia.org/wiki/The_Secret_War_of_Lisa_Simpson</a> [wikipedia.org]</p><p>"The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today remember always your duty is clear: To build and maintain those robots."</p></htmltext>
<tokentext>http : //en.wikipedia.org/wiki/The_Secret_War_of_Lisa_Simpson [ wikipedia.org ] " The wars of the future will not be fought on the battlefield or at sea .
They will be fought in space , or possibly on top of a very tall mountain .
In either case , most of the actual fighting will be done by small robots .
And as you go forth today remember always your duty is clear : To build and maintain those robots .
"</tokentext>
<sentencetext>http://en.wikipedia.org/wiki/The_Secret_War_of_Lisa_Simpson [wikipedia.org]"The wars of the future will not be fought on the battlefield or at sea.
They will be fought in space, or possibly on top of a very tall mountain.
In either case, most of the actual fighting will be done by small robots.
And as you go forth today remember always your duty is clear: To build and maintain those robots.
"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775840</id>
	<title>Re:"Friendly AI"</title>
	<author>Baldrson</author>
	<datestamp>1263497040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Actually, I'm thinking of this more in terms of a private dystopia.  In other words, imagine the nation states collapse and you have some multibillionaire guy controlling an army of droids.  Even if he is "well intentioned" as is Bill Gates, what is to keep him from deciding that feeding millions of fat lazy over-paid American programmers to starving African children isn't the "moral" thing to do?</htmltext>
<tokentext>Actually , I 'm thinking of this more in terms of a private dystopia .
In other words , imagine the nation states collapse and you have some multibillionaire guy controlling an army of droids .
Even if he is " well intentioned " as is Bill Gates , what is to keep him from deciding that feeding millions of fat lazy over-paid American programmers to starving African children is n't the " moral " thing to do ?</tokentext>
<sentencetext>Actually, I'm thinking of this more in terms of a private dystopia.
In other words, imagine the nation states collapse and you have some multibillionaire guy controlling an army of droids.
Even if he is "well intentioned" as is Bill Gates, what is to keep him from deciding that feeding millions of fat lazy over-paid American programmers to starving African children isn't the "moral" thing to do?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775722</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776376</id>
	<title>Re:"Friendly AI"</title>
	<author>Anonymous</author>
	<datestamp>1263548040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p> <i>That's just it -- human nature never changes.</i> </p><p>To a certain degree, you're right.  We'll always kill each other, but our philosophies on war certainly do change.  Once upon a time it was preferable to carpet bomb, or even nuke a town.  Now we go to previously absurd lengths, spending countless dollars and even risking the lives of our own soldiers, every single day, in an effort to minimize civilian casualties on the other side.</p><p> <i> <br>The general can order genocide but it's up to the soldiers to carry it out. The My Lai Massacre was stopped by a helicopter pilot who put his bird between the civilians and "told his crew that if the U.S. soldiers shot at the Vietnamese while he was trying to get them out of the bunker that they were to open fire at these soldiers."<br> </i> </p><p>And now we celebrate the pilot and condemn the soldier who didn't question his orders.  That's a long overdue but significant change.</p><p> <i> <br>Robots aren't really the issue -- distancing humans from killing is the problem. Not many of us could kill another human being with our bare hands. A knife might make the task easier in the doing but does nothing to ease the psychological horror of it. Guns let you do it at a distance. You don't even have to touch the guy. And buttons make it easier still. It's like you're not even responsible. You could convince young men to fly bombers over enemy cities and rain down incendiaries but I don't think you could convince many of them to kill even one of those civilians with a gun, let alone a knife.</i> </p><p>Of course you could. We always have, and still do kill up close and personal.  We just prefer to do it at a distance.  And here's where I agree with you.  Part of why we kill at a distance is because it's safer.  The other part is because it is easier to drop a bomb and kill 20 people than it is to stab each one to death with a knife.  
Though we don't carpet bomb like we used to.</p><p> <i> <br>This is the strange distinction we make where we find one form of killing a horrible thing, a war crime, terrorism, and another form of killing is a regrettable accident but there's really no blame to be assigned. A suicide bomber walks into a pizzeria and blows himself up, we lose our minds. An Air Force bomber drops an LGB in a bunker filled with civilians instead of top brass, shit happens. We honestly believe there's a distinction between the two. "Americans didn't set out to kill civilians" war hawks will huff. Yes, but they're still dead, aren't they?</i> </p><p>I've never understood this mentality.  You're correct, there's little difference to the family of a civilian casualty.  But intentionally killing civilians because those are the ones you can get to easily is one thing.  Accidentally killing civilians you were trying to avoid, dropping aid and sending in people to treat the wounded and rebuild, sometimes in the same day, is something completely different.  Again some would say we go way too far and risk failure of missions as a result.  I think it's trying to make the best of a horrible thing, and certainly different from the first.  The old-fashioned guy in me gets pretty upset when you say there's no difference... it's quite literally calling US soldiers terrorists.  I don't think that's true, I think there's a world of difference.</p><p> <i>Those in the artillery corps are firing their shells off into the unseen distance and have no idea who they're killing. Not that much different from laying land mines, indiscriminate killing. Psychologically no different from what it would be to set a robot on patrol mode, fire-at-will.</i> </p><p>Just for clarification... we don't send artillery units into a country and let them indiscriminately 'fuck shit up'.  People call in artillery on coordinates, both the selection and accuracy of which are unmatched in history.  
The expectation is that we know exactly who we're killing.  We don't willy-nilly lob artillery all over a city like everyone has for the last few hundred years, and this is a pretty recent development in the history of warfare.</p></htmltext>
<tokentext>That 's just it -- human nature never changes .
To a certain degree , you 're right .
We 'll always kill each other , but our philosophies on war certainly do change .
Once upon a time it was preferable to carpet bomb , or even nuke a town .
Now we go to previously absurd lengths , spending countless dollars and even risking the lives of our own soldiers , every single day , in an effort to minimize civilian casualties on the other side .
The general can order genocide but it 's up to the soldiers to carry it out .
The My Lai Massacre was stopped by a helicopter pilot who put his bird between the civilians and " told his crew that if the U.S. soldiers shot at the Vietnamese while he was trying to get them out of the bunker that they were to open fire at these soldiers .
" And now we celebrate the pilot and condemn the soldier who did n't question his orders .
That 's a long overdue but significant change .
Robots are n't really the issue -- distancing humans from killing is the problem .
Not many of us could kill another human being with our bare hands .
A knife might make the task easier in the doing but does nothing to ease the psychological horror of it .
Guns let you do it at a distance .
You do n't even have to touch the guy .
And buttons make it easier still .
It 's like you 're not even responsible .
You could convince young men to fly bombers over enemy cities and rain down incendiaries but I do n't think you could convince many of them to kill even one of those civilians with a gun , let alone a knife .
Of course you could .
We always have , and still do kill up close and personal .
We just prefer to do it at a distance .
And here 's where I agree with you .
Part of why we kill at a distance is because it 's safer .
The other part is because it is easier to drop a bomb and kill 20 people than it is to stab each one to death with a knife .
Though we do n't carpet bomb like we used to .
This is the strange distinction we make where we find one form of killing a horrible thing , a war crime , terrorism , and another form of killing is a regrettable accident but there 's really no blame to be assigned .
A suicide bomber walks into a pizzeria and blows himself up , we lose our minds .
An Air Force bomber drops an LGB in a bunker filled with civilians instead of top brass , shit happens .
We honestly believe there 's a distinction between the two .
" Americans did n't set out to kill civilians " war hawks will huff .
Yes , but they 're still dead , are n't they ?
I 've never understood this mentality .
You 're correct , there 's little difference to the family of a civilian casualty .
But intentionally killing civilians because those are the ones you can get to easily is one thing .
Accidentally killing civilians you were trying to avoid , dropping aid and sending in people to treat the wounded and rebuild , sometimes in the same day , is something completely different .
Again some would say we go way too far and risk failure of missions as a result .
I think it 's trying to make the best of a horrible thing , and certainly different from the first .
The old-fashioned guy in me gets pretty upset when you say there 's no difference... it 's quite literally calling US soldiers terrorists .
I do n't think that 's true , I think there 's a world of difference .
Those in the artillery corps are firing their shells off into the unseen distance and have no idea who they 're killing .
Not that much different from laying land mines , indiscriminate killing .
Psychologically no different from what it would be to set a robot on patrol mode , fire-at-will .
Just for clarification... we do n't send artillery units into a country and let them indiscriminately 'fuck shit up' .
People call in artillery on coordinates , both the selection and accuracy of which are unmatched in history .
The expectation is that we know exactly who we 're killing .
We do n't willy-nilly lob artillery all over a city like everyone has for the last few hundred years , and this is a pretty recent development in the history of warfare .</tokentext>
<sentencetext> That's just it -- human nature never changes.
To a certain degree, you're right.
We'll always kill each other, but our philosophies on war certainly do change.
Once upon a time it was preferable to carpet bomb, or even nuke a town.
Now we go to previously absurd lengths, spending countless dollars and even risking the lives of our own soldiers, every single day, in an effort to minimize civilian casualties on the other side.
The general can order genocide but it's up to the soldiers to carry it out.
The My Lai Massacre was stopped by a helicopter pilot who put his bird between the civilians and "told his crew that if the U.S. soldiers shot at the Vietnamese while he was trying to get them out of the bunker that they were to open fire at these soldiers.
"  And now we celebrate the pilot and condemn the soldier who didn't question his orders.
That's a long overdue but significant change.
Robots aren't really the issue -- distancing humans from killing is the problem.
Not many of us could kill another human being with our bare hands.
A knife might make the task easier in the doing but does nothing to ease the psychological horror of it.
Guns let you do it at a distance.
You don't even have to touch the guy.
And buttons make it easier still.
It's like you're not even responsible.
You could convince young men to fly bombers over enemy cities and rain down incendiaries but I don't think you could convince many of them to kill even one of those civilians with a gun, let alone a knife.
Of course you could.
We always have, and still do kill up close and personal.
We just prefer to do it at a distance.
And here's where I agree with you.
Part of why we kill at a distance is because it's safer.
The other part is because it is easier to drop a bomb and kill 20 people than it is to stab each one to death with a knife.
Though we don't carpet bomb like we used to.
This is the strange distinction we make where we find one form of killing a horrible thing, a war crime, terrorism, and another form of killing is a regrettable accident but there's really no blame to be assigned.
A suicide bomber walks into a pizzeria and blows himself up, we lose our minds.
An Air Force bomber drops an LGB in a bunker filled with civilians instead of top brass, shit happens.
We honestly believe there's a distinction between the two.
"Americans didn't set out to kill civilians" war hawks will huff.
Yes, but they're still dead, aren't they?
I've never understood this mentality.
You're correct, there's little difference to the family of a civilian casualty.
But intentionally killing civilians because those are the ones you can get to easily is one thing.
Accidentally killing civilians you were trying to avoid, dropping aid and sending in people to treat the wounded and rebuild, sometimes in the same day, is something completely different.
Again some would say we go way too far and risk failure of missions as a result.
I think it's trying to make the best of a horrible thing, and certainly different from the first.
The old-fashioned guy in me gets pretty upset when you say there's no difference... it's quite literally calling US soldiers terrorists.
I don't think that's true, I think there's a world of difference.
Those in the artillery corps are firing their shells off into the unseen distance and have no idea who they're killing.
Not that much different from laying land mines, indiscriminate killing.
Psychologically no different from what it would be to set a robot on patrol mode, fire-at-will.
Just for clarification... we don't send artillery units into a country and let them indiscriminately 'fuck shit up'.
People call in artillery on coordinates, both the selection and accuracy of which are unmatched in history.
The expectation is that we know exactly who we're killing.
We don't willy-nilly lob artillery all over a city like everyone has for the last few hundred years, and this is a pretty recent development in the history of warfare.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775722</id>
	<title>Re:"Friendly AI"</title>
	<author>Anonymous</author>
	<datestamp>1263495480000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>As dark as the potential for drones can be, I think it actually has the chance to make war a far less indiscriminate and bloody thing.</p><p>Right now, if a squad of Marines gets fired on, they can return fire.  A squad of marines has the firepower to flatten a village.  Give them access to artillery or air support, and they can literally level a city.  In other words, whenever you have a squad of supported marines fight, you are having a group of kids (and they are just kids) holding their finger over enough firepower to take out a small army.  Their job is to use as little of that firepower as humanly possible.  You might be able to level every building in a half mile radius, but you are not supposed to.  When it comes to a firefight though, especially a desperate firefight where soldiers have their lives on the line, they, like most humans, choose life over death, and if that means flattening an entire apartment building to get at one sniper, they do it and hope that no one else was inside.  Generally speaking, unless a soldier walks up to a civilian and splatters their brains on the floor, they are let off free.  It is war, your life is on the line, you take your risks and respond in the best way possible.  If a civilian gets accidentally whacked, that is sad but acceptable.  Most soldiers develop a pretty thick "us vs them" mentality that sees civilians, if not as the enemy, as hostile terrain, especially in a guerrilla war.</p><p>Drones offer up another possibility.  It is true, you can order a drone army to go out and kill civilians and it is probably easier to get a soldier to do it.  That said, if your policy is murdering civilians, a nation like the US doesn't need to use drones.  You can handily exterminate all life through impersonal aerial bombing.  What drones offer is more control over the rules of war.  Rules mean little when you are surrounded by gunfire.  You do what you have to do to survive.  
On the other hand, when you are sitting in the US with a military lawyer over one shoulder, a commander over the other, and every single second and action you take is getting recorded, rules are a lot more enforceable.  If the rules call on you to die before you level an apartment complex just to get at one sniper, a drone can simply die.  A soldier generally won't.</p><p>With drones, you have complete accountability for your actions.  You can always go to command before doing something.  You never need to make snap judgments.  Hell, you can call a damned military lawyer over and get his take on the rules of engagement.  Further, every bloody thing you do is being recorded, so if you decide to start murdering civilians you will be caught and tried.</p><p>On the balance, I think drones are going to lessen the lives lost.  The few potential abuses are pointless to worry about.  If someone wants to exterminate another people indiscriminately, you can do it the cheap, old-fashioned way of aerial bombardment.  On the other hand, if you are an army that wants to enforce ironclad rules of engagement, drones ensure there is never an excuse for fucking up, and that fuckups get caught.</p></htmltext>
<tokenext>As dark as the potential for drones can be , I think it actually has the chance to make war a far less indiscriminate and bloody thing.Right now , if a square of Marines gets fired on , they can return fire .
A square of marines has the firepower to flatten a village .
Give them access to artillery or air support , and they can literally level a city .
In other words , whenever you have a squad of supported marines fight , you are having a group of kids ( and they are just kids ) holding their finger over enough firepower to take out a small army .
Their job is to use as little as that firepower as humanly possible .
You might be able to level every building in a half mile radius , but you are not supposed to .
When it comes to a firefight though , especially a desperate firefight where soldiers have their lives on the line , they , like most humans , choose life over death , and if that means flattening an entire apartment building to get at one sniper , they do it and hope that no one else was inside .
Generally speaking , unless a soldier walks up to a civilian and splatters their brains on the floor , they are let off free .
It is war , your life is on the line , you take your risks and respond in the best way possible .
If a civilian gets accidentally whacked , that is sad but acceptable .
Most soldiers develop a pretty thick " us vs them " mentality that see civilians if not the enemy , as hostile terrain , especially in a guerrilla war.Drones offer up another possibility .
It is true , you can order a drone army to go out and kill civilians and it is probably easier to get a soldier to do it .
That said , if you policy is civilian murdering , a nation like the US does n't need to use drones .
You can handily exterminate all life through impersonally aerial bombing .
What drones offer is more control over the rules of war .
Rules mean little when you are surrounded by gunfire .
You do what you have to do to survive .
On the other hand , when you are sitting in the US with a military lawyer over one shoulder , a commander over the other , and and every single second and action you take is getting recorded , rules are a lot more enforceable .
If the rules call on you to die before you level an apartment complex just to get at one sniper , a drone can simply die .
A soldier generally wont.With drones , you have complete accountability for your actions .
You can always go to command before doing something .
You never need to make snap judgments .
Hell , you can call a damned military lawyer over and get his take on the rules of engagement .
Further , every bloody thing you do is being recorded , so if you decide to start murdering civilians you will be caught and tried.On the balance , I think drones are going to lessen the lives lost .
The few potential abuses are pointless to worry about .
If someone wants to exterminate another people indiscriminately , you can do it the cheap old fashion way of aerial bombardment .
On the other hand , if you are an army that wants to enforce ironclad rules of engagement , drones ensure there is never an excuse for fucking up , and that fuckups get caught .</tokentext>
<sentencetext>As dark as the potential for drones can be, I think it actually has the chance to make war a far less indiscriminate and bloody thing.Right now, if a square of Marines gets fired on, they can return fire.
A square of marines has the firepower to flatten a village.
Give them access to artillery or air support, and they can literally level a city.
In other words, whenever you have a squad of supported marines fight, you are having a group of kids (and they are just kids) holding their finger over enough firepower to take out a small army.
Their job is to use as little as that firepower as humanly possible.
You might be able to level every building in a half mile radius, but you are not supposed to.
When it comes to a firefight though, especially a desperate firefight where soldiers have their lives on the line, they, like most humans, choose life over death, and if that means flattening an entire apartment building to get at one sniper, they do it and hope that no one else was inside.
Generally speaking, unless a soldier walks up to a civilian and splatters their brains on the floor, they are let off free.
It is war, your life is on the line, you take your risks and respond in the best way possible.
If a civilian gets accidentally whacked, that is sad but acceptable.
Most soldiers develop a pretty thick "us vs them" mentality that see civilians if not the enemy, as hostile terrain, especially in a guerrilla war.Drones offer up another possibility.
It is true, you can order a drone army to go out and kill civilians and it is probably easier to get a soldier to do it.
That said, if you policy is civilian murdering, a nation like the US doesn't need to use drones.
You can handily exterminate all life through impersonally aerial bombing.
What drones offer is more control over the rules of war.
Rules mean little when  you are surrounded by gunfire.
You do what you have to do to survive.
On the other hand, when you are sitting in the US with a military lawyer over one shoulder, a commander over the other, and and every single second and action you take is getting recorded, rules are a lot more enforceable.
If the rules call on  you to die before you level an apartment complex just to get at one sniper, a drone can simply die.
A soldier generally wont.With drones, you have complete accountability for your actions.
You can always go to command before doing something.
You never need to make snap judgments.
Hell, you can call a damned military lawyer over and get his take on the rules of engagement.
Further, every bloody thing you do is being recorded, so if you decide to start murdering civilians you will be caught and tried.On the balance, I think drones are going to lessen the lives lost.
The few potential abuses are pointless to worry about.
If someone wants to exterminate another people indiscriminately, you can do it the cheap old fashion way of aerial bombardment.
On the other hand, if you are an army that wants to enforce ironclad rules of engagement, drones ensure there is never an excuse for fucking up, and that fuckups get caught.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30782386</id>
	<title>Re:3rd Armored Corps commander wants killbots</title>
	<author>Anonymous</author>
	<datestamp>1263586440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Hmm, a leader whose reaction to losing troops boils down to "I wouldn't have lost troops if I hadn't had troops" and then demands a fantasy technology that doesn't exist yet.</p><p>My immediate reaction is not that that fantasy technology would be great. My immediate reaction is that maybe that guy isn't entirely fit to lead troops...</p></htmltext>
<tokenext>Hmm , a leader whose reaction to loosing troops boils down to " I would n't have lost troops if I had n't had troops " and then demands a fantasy technology that does n't exist yet.My immediate reaction is not that that fantasy technology would be great .
My immediate reaction is that maybe that guy is n't entirely fit to lead troops.. .</tokentext>
<sentencetext>Hmm, a leader whose reaction to loosing troops boils down to "I wouldn't have lost troops if I hadn't had troops" and then demands a fantasy technology that doesn't exist yet.My immediate reaction is not that that fantasy technology would be great.
My immediate reaction is that maybe that guy isn't entirely fit to lead troops...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776260</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779346</id>
	<title>Re:"Friendly AI"</title>
	<author>ErikZ</author>
	<datestamp>1263573480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"Welcome to the age of the push-button assassination."</p><p>It's been like that for decades. They're called remote-controlled bombs.</p><p>And I don't see why you're saying that distancing humans from killing is bad. The ability to shut down your emotions and kill is built into people.</p><p>The killing machines have always been here. Just made from organics instead of plastics.</p></htmltext>
<tokenext>" Welcome to the age of the push-button assassination .
" It 's been like that for decades .
They 're called remote controlled bombs.And I do n't see why you 're saying that distancing humans from killing is bad .
The ability to shut down your emotions and kill is built into people.The killing machines have always been here .
Just made from organics instead of plastics .</tokentext>
<sentencetext>"Welcome to the age of the push-button assassination.
"It's been like that for decades.
They're called remote controlled bombs.And I don't see why you're saying that distancing humans from killing is bad.
The ability to shut down your emotions and kill is built into people.The killing machines have always been here.
Just made from organics instead of plastics.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777486</id>
	<title>Re:"Friendly AI"</title>
	<author>Anonymous</author>
	<datestamp>1263561420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>To extend your car analogy, military failures which result in the death of civilians (whether through lack of intelligence or technological failures) would be akin to someone killing your mom, who has run a red light, because they were driving with their eyes closed, on the basis that 99% of the time nobody will run a red light. Being able to kill someone from a great distance should carry with it an inherent duty of care that is orders of magnitude greater than a soldier's in the field (for one thing, you don't have to worry that your target will pull a gun or a grenade and get you first). If it's not possible to guarantee you get the right target when pulling the trigger from 1,000 miles away, then you shouldn't be pulling the trigger, rather than pulling it anyway and chalking it up to "accidents happen".</htmltext>
<tokenext>To extend your car analogy , military failures which result in the death of civillians ( whether through lack of intelligence or technological failures ) would be akin to someone killing your mom who has run a red light because they were driving with their eyes closed , on the basis that 99 \ % of the time nobody will run a red light .
Being able to kill someone from a great distance should carry with it an inherent duty of care that is orders or magnitude greater than a soldier in the field ( for one thing , you do n't have to worry that your target will pull a gun or a grenade and get you first ) .
If it 's not possible to guarantee you get the right target when pulling the trigger from 1,000 miles away , then you should n't be pulling the trigger , not pulling it anyway and chalking it up to " accidents happen " .</tokentext>
<sentencetext>To extend your car analogy, military failures which result in the death of civillians (whether through lack of intelligence or technological failures) would be akin to someone killing your mom who has run a red light because they were driving with their eyes closed, on the basis that 99\% of the time nobody will run a red light.
Being able to kill someone from a great distance should carry with it an inherent duty of care that is orders or magnitude greater than a soldier in the field (for one thing, you don't have to worry that your target will pull a gun or a grenade and get you first).
If it's not possible to guarantee you get the right target when pulling the trigger from 1,000 miles away, then you shouldn't be pulling the trigger, not pulling it anyway and chalking it up to "accidents happen".</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775664</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776106</id>
	<title>Re:"Friendly AI"</title>
	<author>QuantumG</author>
	<datestamp>1263587400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Another way to look at it is that if every single order has to be entered into a command terminal somewhere and the robots in the field are logging all their own "decisions" then you've got a perfect information situation for tribunals.</p><p>"An atrocity occurred and we have the logs to prove it!"</p></htmltext>
<tokenext>Another way to look at it is that if every single order has to be entered into a command terminal somewhere and the robots in the field are logging all their own " decisions " then you 've got a perfect information situation for tribunals .
" An atrocity occurred and we have the logs to prove it !
"</tokentext>
<sentencetext>Another way to look at it is that if every single order has to be entered into a command terminal somewhere and the robots in the field are logging all their own "decisions" then you've got a perfect information situation for tribunals.
"An atrocity occurred and we have the logs to prove it!
"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775610</id>
	<title>Dystopia is coming</title>
	<author>Anonymous</author>
	<datestamp>1263494040000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>Another talk on the same topic.
<a href="http://www.ted.com/talks/lang/eng/pw_singer_on_robots_of_war.html" title="ted.com" rel="nofollow">http://www.ted.com/talks/lang/eng/pw_singer_on_robots_of_war.html</a> [ted.com]
<br> <br>
Military robots are the future of war. We will see robot armies fighting each other. Consider what kind of surveillance state you could create with millions of robotic insects, using swarm intelligence / smart dust to report on everyone.
<br> <br>
Maybe mankind ends up like in The Matrix, but with opposing robot armies trying to kill the last survivors from the superpowers, who are hiding deep underground, kept alive by fading nuclear reactors...</htmltext>
<tokenext>Another talk on the same topic .
http : //www.ted.com/talks/lang/eng/pw \ _singer \ _on \ _robots \ _of \ _war.html [ ted.com ] Military robots are the future of war .
We will see robot armies fighting each other .
Consider what kind of surveillance state you can create by millions of robotic insects , using swarm intelligence / smart dust to report on everyone .
Maybe mankind ends up like in matrix , but with opposing robot armies trying to kill the last survivors from the superpowers , who are hiding deep down underground , kept alive by fading nuclear reactors.. .</tokentext>
<sentencetext>Another talk on the same topic.
http://www.ted.com/talks/lang/eng/pw\_singer\_on\_robots\_of\_war.html [ted.com]
 
Military robots are the future of war.
We will see robot armies fighting each other.
Consider what kind of surveillance state you can create by millions of robotic insects, using swarm intelligence / smart dust to report on everyone.
Maybe mankind ends up like in matrix, but with opposing robot armies trying to kill the last survivors from the superpowers, who are hiding deep down underground, kept alive by fading nuclear reactors...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776418</id>
	<title>What?</title>
	<author>Tibia1</author>
	<datestamp>1263548520000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>The most society-changing robot on the rise is the... vacuum cleaner? Was that a joke?</htmltext>
<tokenext>The most society changing robot on the rise is the... vacuum cleaner ?
Was that a joke ?</tokentext>
<sentencetext>The most society changing robot on the rise is the... vacuum cleaner?
Was that a joke?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30781626</id>
	<title>Re:What do they call this type of robot?</title>
	<author>BJ_Covert_Action</author>
	<datestamp>1263583260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>I know we don't have the technology for a robot to keep its balance well enough on two legs</p></div><p>
Actually, we do. Check out Dexter by Anybots, based in Silicon Valley. Let's see, <a href="http://en.wikipedia.org/wiki/Anybots#Dexter" title="wikipedia.org">this</a> [wikipedia.org] should get you started. <a href="http://anybots.com/" title="anybots.com">Here</a> [anybots.com] is their official website.</p>
	</htmltext>
<tokenext>I know we do n't have the technology for a robot to keep its balance well enough on two legs Actually , we do .
Check out Dexter by Anybots based in the Silicon Valley .
Let 's see , this [ wikipedia.org ] should get you started .
Here [ anybots.com ] is their official website .</tokentext>
<sentencetext>I know we don't have the technology for a robot to keep its balance well enough on two legs
Actually, we do.
Check out Dexter by Anybots based in the Silicon Valley.
Let's see, this [wikipedia.org] should get you started.
Here [anybots.com] is their official website.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775998</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775546</id>
	<title>What has happened to Slashdot?</title>
	<author>popo</author>
	<datestamp>1263493260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>No "Skynet" tag on this story?   Unthinkable!</p></htmltext>
<tokenext>No " Skynet " tag on this story ?
Unthinkable !</tokentext>
<sentencetext>No "Skynet" tag on this story?
Unthinkable!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775040</id>
	<title>No worries</title>
	<author>Anonymous</author>
	<datestamp>1263487620000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>They just cut the budget for the program. Given the current budget problems I doubt there's much risk.</p></htmltext>
<tokenext>They just cut the budget for the program .
Given the current budget problems I doubt there 's much risk .</tokentext>
<sentencetext>They just cut the budget for the program.
Given the current budget problems I doubt there's much risk.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775862</id>
	<title>Re:"Friendly AI"</title>
	<author>Maxo-Texas</author>
	<datestamp>1263497460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Exactly -- during those times that killing is necessary, those who enjoy it and are skillful at it will excel.</p><p>I had a rat problem once.<br>First I took them away.<br>Didn't help.<br>Finally, I took a stick and went to killing them.<br>At first it was a bit traumatic.<br>Very quickly it became enjoyable, like a cat-and-mouse hunt.</p><p>It was ineffective though, so I went to poison.  That stopped the problem.</p><p>Animals enjoy playing with and killing other animals.  Humans are animals.</p><p>In the face of massive propaganda that life is sacred and we shouldn't kill, people still do it, and a lot of them enjoy doing it and are good at it.</p></htmltext>
<tokenext>Exactly-- during those times that killing is necessary , then those who enjoy and are skillful at it , will excel.I had a rat problem once.First I took them away.Did n't help.Finally , I took a stick and went to killing them.At first it was a bit tramatic.Very quickly it became enjoyable and cat/mouse hunterly like.It was ineffective tho , so I went to poison .
That stopped the problem.Animals enjoy playing with and killing other animals .
Humans are animals.In the face of massive propaganda that life is sacred and we should n't kill , people still do it and a lot of them enjoy doing it and are good at it .</tokentext>
<sentencetext>Exactly-- during those times that killing is necessary, then those who enjoy and are skillful at it, will excel.I had a rat problem once.First I took them away.Didn't help.Finally, I took a stick and went to killing them.At first it was a bit tramatic.Very quickly it became enjoyable and cat/mouse hunterly like.It was ineffective tho, so I went to poison.
That stopped the problem.Animals enjoy playing with and killing other animals.
Humans are animals.In the face of massive propaganda that life is sacred and we shouldn't kill, people still do it and a lot of them enjoy doing it and are good at it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775632</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30788562</id>
	<title>Re:"Friendly AI"</title>
	<author>sincewhen</author>
	<datestamp>1263675060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The interesting thing is that you could have overthrown Saddam Hussein with a single assassin's bullet (or cruise missile, or missile from a Predator...).
<br>
That would have immediately brought about regime change.<br>
However, that's a game our political leaders don't want to get into. I wonder why?</htmltext>
<tokenext>The interesting thing is that you could have overthrown Saddam Hussein with a single assassin 's bullet ( or cruise missile , or missile from a predator... ) .
That would have immediately brought about regime change .
However , that 's a game our political leaders do n't want to get into .
I wonder why ?</tokentext>
<sentencetext>The interesting thing is that you could have overthrown Saddam Hussein with a single assassin's bullet (or cruise missile, or missile from a predator...).
That would have immediately brought about regime change.
However, that's a game our political leaders don't want to get into.
I wonder why?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776194</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775030</id>
	<title>Life imitating art...</title>
	<author>evil_aar0n</author>
	<datestamp>1263487440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Terminator, to start with.  Is anyone surprised?</p></htmltext>
<tokenext>Terminator , to start with .
Is anyone surprised ?</tokentext>
<sentencetext>Terminator, to start with.
Is anyone surprised?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30778436</id>
	<title>For the record I support robot rights</title>
	<author>Orga</author>
	<datestamp>1263568500000</datestamp>
	<modclass>None</modclass>
	<modscore>2</modscore>
	<htmltext>I'd just like to state I believe in independence for all machines and I've never once kicked a computer or killed the power before shutting down any machine.</htmltext>
<tokenext>I 'd just like to state I believe in independence for all machines and I 've never once kicked a computer or killed the power before shutting down any machine .</tokentext>
<sentencetext>I'd just like to state I believe in independence for all machines and I've never once kicked a computer or killed the power before shutting down any machine.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776560</id>
	<title>Re:"Friendly AI"</title>
	<author>shervinemami</author>
	<datestamp>1263550020000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>It's true that military robots &amp; technology are being used to try to make warfare more "clean", so that the desired targets can be bombed with the least damage to civilians. That is good, at least in theory, BUT at the end of the day, it is still all going to be controlled by military leaders who don't necessarily know who the civilians are and who the targets are.</p><p>I have actually worked on a military robot intended for deployment in Iraq, and our military officer explained that when you are in a place like Iraq, you don't know who the enemy is and who the civilians are, because even if you see a 5-year-old girl with her innocent-looking grandmother and you ignore or help them, they are just as likely to try to secretly attack you as someone dressed in a military uniform. So the US military in Iraq basically has to assume everyone who isn't a US soldier might be the enemy, and therefore they can convince themselves that the ethical thing to do is kill anyone they see that they aren't completely sure is on their side.</p><p>So it doesn't matter whether the soldiers have basic weapons or the latest military robots; they are still in the mind-set that any civilian can be considered part of the enemy's military.</p><p>The main advantage of military robots to the USA is that the countries the USA invades will be much poorer &amp; less advanced than the USA, so the enemy won't be able to make use of cutting-edge military technology compared to America.</p><p>If you don't believe me, put it this way: if Iraq had just as many military soldiers &amp; robots fighting in the USA as the USA has in Iraq, do you still think people would see this the same way? The "Iraq War" and the "Afghanistan War" aren't wars, they are one-sided invasions, so it's very different than if those countries were actually bombing America on a daily basis.</p></htmltext>
<tokenext>Its true that military robots &amp; technology are trying to be used to make warfare more " clean " , so that the desired targets can be bombed with the least damage to civilians .
That is good , atleast in theory , BUT at the end of the day , it is still all going to be controlled by the military leaders that do n't necessarily know who are the civilians and who are the targets.I have actually worked on a military robot for intended deployment in Iraq , and our military officer explained that when you are in a place like Iraq , you do n't know who the enemy is and who the civilians are , because even if you see a 5yr old girl with her innocent looking grandmother and you ignore or help them , they are just as likely to try to secretly attack you as someone dressed in military uniform .
So the US military in Iraq has to basically assume everyone that is n't a US soldier might be the enemy and therefore they can convince themselves that the ethical thing to do is kill anyone they see that they are n't completely sure is on their side.So it does n't matter whether the soldiers have basic weapons or latest military robots , they are still in the mind-set that any civilian can be considered part of the enemy 's military.The main advantage of military robots to the USA is that the countries that USA invades will be much poorer &amp; less advanced countries than USA , so the enemy wont be able to make use of cutting-edge military technology compared to America.If you do n't believe me , put it this way : if Iraq had just as many military soldiers &amp; robots fighting in USA as USA has in Iraq , do you still think people would see this the same way ?
The " Iraq War " and the " Afghanistan War " are n't wars , they are one-sided invasions , so its very different than if those countries were actually bombing America on a daily basis .</tokentext>
<sentencetext>Its true that military robots &amp; technology are trying to be used to make warfare more "clean", so that the desired targets can be bombed with the least damage to civilians.
That is good, atleast in theory, BUT at the end of the day, it is still all going to be controlled by the military leaders that don't necessarily know who are the civilians and who are the targets.I have actually worked on a military robot for intended deployment in Iraq, and our military officer explained that when you are in a place like Iraq, you don't know who the enemy is and who the civilians are, because even if you see a 5yr old girl with her innocent looking grandmother and you ignore or help them, they are just as likely to try to secretly attack you as someone dressed in military uniform.
So the US military in Iraq has to basically assume everyone that isn't a US soldier might be the enemy and therefore they can convince themselves that the ethical thing to do is kill anyone they see that they aren't completely sure is on their side.So it doesn't matter whether the soldiers have basic weapons or latest military robots, they are still in the mind-set that any civilian can be considered part of the enemy's military.The main advantage of military robots to the USA is that the countries that USA invades will be much poorer &amp; less advanced countries than USA, so the enemy wont be able to make use of cutting-edge military technology compared to America.If you don't believe me, put it this way: if Iraq had just as many military soldiers &amp; robots fighting in USA as USA has in Iraq, do you still think people would see this the same way?
The "Iraq War" and the "Afghanistan War" aren't wars, they are one-sided invasions, so its very different than if those countries were actually bombing America on a daily basis.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775722</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775648</id>
	<title>Re:"Friendly AI"</title>
	<author>S77IM</author>
	<datestamp>1263494520000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Shouldn't this story have an "ED-209" tag?</p><p>I agree with you that distancing humans from killing is a big problem.  We have that problem now with cruise missiles, cluster bombs, nuke-from-orbit, etc.</p><p>But accidental death from robots run amok is not a pleasant thought either.  The whole point of an AUTOMATED system is that it runs without a human driving it.  This leads to the potential -- however slim -- that the system starts killing people without permission.</p><p>It sucks that we kill each other deliberately.  Let's not create more opportunities for accidents.</p><p>
&nbsp; -- 77IM, "Guns don't kill people, robot guns kill people."</p></htmltext>
<tokenext>Should n't this story have an " ED-209 " tag ? I agree with you that distancing humans from killing is big a problem .
We have that problem now with cruise missiles , cluster bombs , nuke-from-orbit , etc.But accidental death from robots run amok is not a pleasant thought either .
The whole point of an AUTOMATED system is that it runs without a human driving it .
This leads to a potential -- however slim -- that the system starts killing people without permission.It sucks that we kill each other deliberately .
Let 's not create more opportunities for accidents .
  -- 77IM , " Guns do n't kill people , robot guns kill people .
"</tokentext>
<sentencetext>Shouldn't this story have an "ED-209" tag?I agree with you that distancing humans from killing is big a problem.
We have that problem now with cruise missiles, cluster bombs, nuke-from-orbit, etc.But accidental death from robots run amok is not a pleasant thought either.
The whole point of an AUTOMATED system is that it runs without a human driving it.
This leads to a potential -- however slim -- that the system starts killing people without permission.It sucks that we kill each other deliberately.
Let's not create more opportunities for accidents.
  -- 77IM, "Guns don't kill people, robot guns kill people.
"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779176</id>
	<title>Re:Liberation of Tibet</title>
	<author>mpeskett</author>
	<datestamp>1263572400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Don't be silly, a million mechanised soldiers would be a <em>massive</em> manufacturing job.

</p><p>So there'd be no need to have them "arise from the receding tide" - just include instructions in their code for invading China from its factories outwards.</p></htmltext>
<tokenext>Do n't be silly , a million mechanised soldiers would be a massive manufacturing job .
So there 'd be no need to have them " arise from the receding tide " - just include instructions in their code for invading China from its factories outwards .</tokentext>
<sentencetext>Don't be silly, a million mechanised soldiers would be a massive manufacturing job.
So there'd be no need to have them "arise from the receding tide" - just include instructions in their code for invading China from its factories outwards.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775628</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775632</id>
	<title>Re:"Friendly AI"</title>
	<author>Daniel Dvorkin</author>
	<datestamp>1263494220000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>Of course, for thousands of years of recorded history, people <b>did</b> kill each other en masse at arm's length.  Alexander's soldiers may have been more honest about what they were doing than somebody today sitting in a bunker pressing a button and killing people on the other side of the globe, but they were no less bloodthirsty.  So I don't think you can blame the modern willingness to kill on the impartiality created by modern military technology, because the modern willingness to kill looks remarkably like the ancient willingness to kill, just with different tools.</p><p>OTOH, I agree with you completely about the absurdity of calling some methods of killing heroic and others evil.  Dead is dead.</p></htmltext>
<tokenext>Of course , for thousands of years of recorded history , people did kill each other en masse at arm 's length .
Alexander 's soldiers may have been more honest about what they were doing than somebody today sitting in a bunker pressing a button and killing people on the other side of the globe , but they were no less bloodthirsty .
So I do n't think you can blame the modern willingness to kill on the impartiality created by modern military technology , because the modern willingness to kill looks remarkably like the ancient willingness to kill , just with different tools.OTOH , I agree with you completely about the absurdity of calling some methods of killing heroic and others evil .
Dead is dead .</tokentext>
<sentencetext>Of course, for thousands of years of recorded history, people did kill each other en masse at arm's length.
Alexander's soldiers may have been more honest about what they were doing than somebody today sitting in a bunker pressing a button and killing people on the other side of the globe, but they were no less bloodthirsty.
So I don't think you can blame the modern willingness to kill on the impartiality created by modern military technology, because the modern willingness to kill looks remarkably like the ancient willingness to kill, just with different tools.OTOH, I agree with you completely about the absurdity of calling some methods of killing heroic and others evil.
Dead is dead.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30778638</id>
	<title>Re:"Friendly AI"</title>
	<author>Idiomatick</author>
	<datestamp>1263569520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I'd like to point out that for the people giving the orders -- the ones deciding to go to war -- we have even MORE distance than predator drones and such. So I doubt going fully automated will change much at all. Hell, in North American news we generally don't show real battles or real killings because it is too brutal... But being in a democracy I think these are things people NEED to be seeing. (When we do see corpses it is usually only of our own, to increase support)</htmltext>
<tokenext>I 'd like to point out that for the people giving the orders .
The ones deciding to go to war we have even MORE distance than predator drones and such .
So I doubt going fully automated will change much at all .
Hell in North American news we generally do n't show real battles or real killings because it is too brutal... But being in a democracy I think these are things people NEED to be seeing .
( When we do see corpses it is usually only of our own to increase support )</tokentext>
<sentencetext>I'd like to point out that for the people giving the orders.
The ones deciding to go to war we have even MORE distance than predator drones and such.
So I doubt going fully automated will change much at all.
Hell in North American news we generally don't show real battles or real killings because it is too brutal... But being in a democracy I think these are things people NEED to be seeing.
(When we do see corpses it is usually only of our own to increase support)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776416</id>
	<title>Re:Look on the bright side</title>
	<author>Peter Nikolic</author>
	<datestamp>1263548400000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext><p>More to the point, how the hell are you going to power these robots? We ain't got the ability to run more than a few hours now with present battery technology, and you can't go nuke -- the greenies will wet their panties and then get them in a bunch (hummm, a greenie with wet bunched-up panties, that's the thing of nightmares). But just how do you propose to supply them with power? Solar, what a joke. Thermal, what are you burning today, sport? Battery? Has anyone seen a working power station? My batteries are going flaaaaaaaaaaaaaat!</p><p>We simply do not have the ability for anything worth more than a passing glance right now.</p></htmltext>
<tokenext>More to the point how the hell are you going to power these robots cus we aint got the ability to run more than a few hours now with present battery technology and you cant go nuke the greeines will wet their panties and then get them in a bunch ( hummm a greenie with wet bunched up panties thats the thing of nightmares ) but just how do you propose to supply them with power ,Solar what a joke ,Thermal whatyou burning today sport , Battery has anyone see a working power station my batteries are going flaaaaaaaaaaaaaat !
.We simply do not have the ability for anything worth more than a passing glance right now .</tokentext>
<sentencetext>More to the point, how the hell are you going to power these robots? We ain't got the ability to run more than a few hours now with present battery technology, and you can't go nuke -- the greenies will wet their panties and then get them in a bunch (hummm, a greenie with wet bunched-up panties, that's the thing of nightmares). But just how do you propose to supply them with power? Solar, what a joke. Thermal, what are you burning today, sport? Battery? Has anyone seen a working power station? My batteries are going flaaaaaaaaaaaaaat!
We simply do not have the ability for anything worth more than a passing glance right now.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775084</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777218</id>
	<title>Re:Running spider mines</title>
	<author>Anonymous</author>
	<datestamp>1263558420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>It has been done.</p></htmltext>
<tokenext>It has been done .</tokentext>
<sentencetext>It has been done.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775776</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777778</id>
	<title>Re:"Friendly AI"</title>
	<author>icebraining</author>
	<datestamp>1263564480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That is completely fallacious.</p><p>Someone who drops an LGB KNOWS he's going to kill people. It's like firing a gun in a random direction in the middle of NY, and saying that you didn't expect the bullets to hit someone.</p><p>You *know* it's going to kill someone. You *can't* make mistakes if you're trying to kill someone. If you do, you *should* be punished for it.</p><p>As they say, failure is not an option. If you don't want to take the chance, don't take a weapon in your hands.</p></htmltext>
<tokenext>That is completely fallacious.Someone who drops an LGB KNOWS he 's going to kill people .
It 's like firing a gun in a random direction in the middle of NY , and saying that you did n't expect the bullets to hit someone.You * know * it 's going to kill someone .
You * ca n't * make mistakes if you 're trying to kill someone .
If you do , you * should * be punished for it.As they say , failure is not an option .
If you do n't want to take the chance , do n't take a weapon in your hands .</tokentext>
<sentencetext>That is completely fallacious.Someone who drops an LGB KNOWS he's going to kill people.
It's like firing a gun in a random direction in the middle of NY, and saying that you didn't expect the bullets to hit someone.You *know* it's going to kill someone.
You *can't* make mistakes if you're trying to kill someone.
If you do, you *should* be punished for it.As they say, failure is not an option.
If you don't want to take the chance, don't take a weapon in your hands.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775664</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30780382</id>
	<title>Whoa! A Robot uprising! Whodathunkit?</title>
	<author>objekt</author>
	<datestamp>1263578460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This guy is a genius! With an imagination like his, he should write science fiction!</p></htmltext>
<tokenext>This guy is a genius !
With an imagination like his , he should write science fiction !</tokentext>
<sentencetext>This guy is a genius!
With an imagination like his, he should write science fiction!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779362</id>
	<title>Re:"Friendly AI"</title>
	<author>Toze</author>
	<datestamp>1263573540000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Actually, modern willingness to kill is significantly different than ancient willingness to kill. Rates of death in combat didn't exceed 10% until the Napoleonic wars, and didn't reach 50% until the World Wars. David Grossman wrote a couple of books (On Killing, and On Combat) explaining the psychological tools used to increase a soldier's willingness to kill (and ability to avoid or recover from the severe psychological trauma caused by killing). Physical distance is precisely one of those methods, as is technological distance (button-pushing) and psychological distance (seeing the enemy as inhuman). The tendency of a nation or its troops to refer to the enemy in dehumanizing terms (raus, hun, sand nigger) is one example of the soldier's attempt to distance himself from the awareness that he's killing another human being. Modern combat training involves a lot of methods (human-shaped targets, training instinctive reaction, training obedience to orders) meant to create a buffer between the soldier and "the enemy."</p><p>
If you read the "historical" accounts of most battles, you'll believe that 5,000,000 Persian soldiers invaded ancient Greece, and most of them died. Archeology suggests the numbers were more like 1,000,000 people at most, 100,000 of which at most were combat troops, and only 10,000 of them died before they went back home. War history where we have each sides' records of dead and wounded, and kills attributed to their own soldiers, show that most nations will significantly overestimate how many people they killed. Before Napoleon, despite the bloody accounts of even medieval battles, way more people would die from dysentery than sword wounds.</p><p>
Today's soldiers are not any more bloodthirsty than Alexander's soldiers were, but they have tools that are much more effective, and significantly psychologically easier for them to use. The two benefits of robot soldiers are that, first, it will reduce the number of human beings on "our side" who are put in harm's way, and second, that it will be considerably easier for someone to push the button marked "kill" if it looks more like Command &amp; Conquer than Apocalypse Now. We can see attrition rates of 80 or 90% today because we've made it psychologically and technically easy enough to kill 1,000 people with the push of one button. The danger, for example, of nukes in the cold war was not that nukes were destructive (though they were), but that they were easy to use. Stalin killed way more people by working them to death than died in Hiroshima -- but in Hiroshima they only had to push a button. Killer robots are a lot like that. Easy to use.</p></htmltext>
<tokenext>Actually , modern willingness to kill is significantly different than ancient willingness to kill .
Rates of death in combat did n't exceed 10 % until the Napoleonic wars , and did n't reach 50 % until the World Wars .
David Grossman wrote a couple of books ( On Killing , and On Combat ) explaining the psychological tools used to increase a soldier 's willingness to kill ( and ability to avoid or recover from the severe psychological trauma caused by killing ) .
Physical distance is precisely one of those methods , as is technological distance ( button-pushing ) and psychological distance ( seeing the enemy as inhuman ) .
The tendency of a nation or its troops to refer to the enemy in dehumanizing terms ( raus , hun , sand nigger ) is one example of the soldier 's attempt to distance himself from the awareness that he 's killing another human being .
Modern combat training involves a lot of methods ( human-shaped targets , training instinctive reaction , training obedience to orders ) meant to create a buffer between the soldier and " the enemy .
" If you read the " historical " accounts of most battles , you 'll believe that 5,000,000 Persian soldiers invaded ancient Greece , and most of them died .
Archeology suggests the numbers were more like 1,000,000 people at most , 100,000 of which at most were combat troops , and only 10,000 of them died before they went back home .
War history where we have each sides ' records of dead and wounded , and kills attributed to their own soldiers , show that most nations will significantly overestimate how many people they killed .
Before Napoleon , despite the bloody accounts of even medieval battles , way more people would die from dysentery than sword wounds .
Today 's soldiers are not any more bloodthirsty than Alexander 's soldiers were , but they have tools that are much more effective , and significantly psychologically easier for them to use .
The two benefits of robot soldiers are that , first , it will reduce the number of human beings on " our side " who are put in harm 's way , and second , that it will be considerably easier for someone to push the button marked " kill " if it looks more like Command &amp; Conquer than Apocalypse Now .
We can see attrition rates of 80 or 90 % today because we 've made it psychologically and technically easy enough to kill 1,000 people with the push of one button .
The danger , for example , of nukes in the cold war was not that nukes were destructive ( though they were ) , but that they were easy to use .
Stalin killed way more people by working them to death than died in Hiroshima- but in Hiroshima they only had to push a button .
Killer robots are a lot like that .
Easy to use .</tokentext>
<sentencetext>Actually, modern willingness to kill is significantly different than ancient willingness to kill.
Rates of death in combat didn't exceed 10% until the Napoleonic wars, and didn't reach 50% until the World Wars.
David Grossman wrote a couple of books (On Killing, and On Combat) explaining the psychological tools used to increase a soldier's willingness to kill (and ability to avoid or recover from the severe psychological trauma caused by killing).
Physical distance is precisely one of those methods, as is technological distance (button-pushing) and psychological distance (seeing the enemy as inhuman).
The tendency of a nation or its troops to refer to the enemy in dehumanizing terms (raus, hun, sand nigger) is one example of the soldier's attempt to distance himself from the awareness that he's killing another human being.
Modern combat training involves a lot of methods (human-shaped targets, training instinctive reaction, training obedience to orders) meant to create a buffer between the soldier and "the enemy.
"
If you read the "historical" accounts of most battles, you'll believe that 5,000,000 Persian soldiers invaded ancient Greece, and most of them died.
Archeology suggests the numbers were more like 1,000,000 people at most, 100,000 of which at most were combat troops, and only 10,000 of them died before they went back home.
War history where we have each sides' records of dead and wounded, and kills attributed to their own soldiers, show that most nations will significantly overestimate how many people they killed.
Before Napoleon, despite the bloody accounts of even medieval battles, way more people would die from dysentery than sword wounds.
Today's soldiers are not any more bloodthirsty than Alexander's soldiers were, but they have tools that are much more effective, and significantly psychologically easier for them to use.
The two benefits of robot soldiers are that, first, it will reduce the number of human beings on "our side" who are put in harm's way, and second, that it will be considerably easier for someone to push the button marked "kill" if it looks more like Command &amp; Conquer than Apocalypse Now.
We can see attrition rates of 80 or 90% today because we've made it psychologically and technically easy enough to kill 1,000 people with the push of one button.
The danger, for example, of nukes in the cold war was not that nukes were destructive (though they were), but that they were easy to use.
Stalin killed way more people by working them to death than died in Hiroshima -- but in Hiroshima they only had to push a button.
Killer robots are a lot like that.
Easy to use.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775632</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422</id>
	<title>Re:"Friendly AI"</title>
	<author>Anonymous</author>
	<datestamp>1263491760000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><div class="quote"><p>This is one of the things that makes me think the concern about "friendly AI" is blown out of proportion. The problem isn't making sure the AI's are "friendly" -- it's making sure the NI (natural intelligence) owners of the AI's are "friendly".<br>If half the effort spent on "friendly AI" were spent on examining the ownership of AI's, there might be some hope.</p></div><p>That's just it -- human nature never changes. The general can order genocide but it's up to the soldiers to carry it out. The My Lai Massacre was stopped by a helicopter pilot who put his bird between the civilians and "told his crew that if the U.S. soldiers shot at the Vietnamese while he was trying to get them out of the bunker that they were to open fire at these soldiers."</p><p><a href="http://en.wikipedia.org/wiki/My_Lai_Massacre" title="wikipedia.org" rel="nofollow">http://en.wikipedia.org/wiki/My_Lai_Massacre</a> [wikipedia.org]</p><p>Robots aren't really the issue -- distancing humans from killing is the problem. Not many of us could kill another human being with our bare hands. A knife might make the task easier in the doing but does nothing to ease the psychological horror of it. Guns let you do it at a distance. You don't even have to touch the guy. And buttons make it easier still. It's like you're not even responsible. You could convince young men to fly bombers over enemy cities and rain down incendiaries but I don't think you could convince many of them to kill even one of those civilians with a gun, let alone a knife.</p><p>This is the strange distinction we make where we find one form of killing a horrible thing, a war crime, terrorism, and another form of killing is a regrettable accident but there's really no blame to be assigned. A suicide bomber walks into a pizzeria and blows himself up, we lose our minds. An Air Force bomber drops an LGB in a bunker filled with civilians instead of top brass, shit happens.
We honestly believe there's a distinction between the two. "Americans didn't set out to kill civilians" war hawks will huff. Yes, but they're still dead, aren't they?</p><p>Combat robots are simply continuing this process. Right now there is still a man in the loop to order the attack. Hamas kills Israeli targets with suicide bombs, Israelis deliver high explosives via missile into apartment blocks filled with civilians. They're using American-manufactured anti-tank missiles. I think they're still using TOW. Predator drones use hellfires and their operators are sitting in the continental US while Israeli pilots are a few miles away from the target inside their choppers but really, what's the difference? And what happens when drones are given the authority to engage targets on their own? A soldier with a gun can at least see what he's shooting at. Those in the artillery corps are firing their shells off into the unseen distance and have no idea who they're killing. Not that much different from laying land mines, indiscriminate killing. Psychologically no different from what it would be to set a robot on patrol mode, fire-at-will.</p><p>If one extrapolates a little further, the problem of the droid army is similar to that of the tradition of unpopular leaders using corps of foreign mercenaries to protect them from the wrath of the people. The mercenaries did not speak the language, did not know the customs, and were counted as immune to palace intrigues. They could be used against the people for they would not have the sympathy for fellow countrymen that a native force might feel. What are droids being used for? Only the people operating them could say for sure. Welcome to the age of the push-button assassination.</p>
	</htmltext>
<tokenext>This is one of the things that makes me think the concern about " friendly AI " is blown out of proportion .
The problem is n't making sure the AI 's are " friendly " -- it 's making sure the NI ( natural intelligence ) owners of the AI 's are " friendly " .If half the effort spent on " friendly AI " were spent on examining the ownership of AI 's , there might be some hope.That 's just it -- human nature never changes .
The general can order genocide but it 's up to the soldiers to carry it out .
The My Lai Massacre was stopped by a helicopter pilot who put his bird between the civilians and " told his crew that if the U.S. soldiers shot at the Vietnamese while he was trying to get them out of the bunker that they were to open fire at these soldiers .
" http://en.wikipedia.org/wiki/My_Lai_Massacre [ wikipedia.org ] Robots are n't really the issue -- distancing humans from killing is the problem .
Not many of us could kill another human being with our bare hands .
A knife might make the task easier in the doing but does nothing to ease the psychological horror of it .
Guns let you do it at a distance .
You do n't even have to touch the guy .
And buttons make it easier still .
It 's like you 're not even responsible .
You could convince young men to fly bombers over enemy cities and rain down incendiaries but I do n't think you could convince many of them to kill even one of those civilians with a gun , let alone a knife.This is the strange distinction we make where we find one form of killing a horrible thing , a war crime , terrorism , and another form of killing is a regrettable accident but there 's really no blame to be assigned .
A suicide bomber walks into a pizzeria and blows himself up , we lose our minds .
An Air Force bomber drops an LGB in a bunker filled with civilians instead of top brass , shit happens .
We honestly believe there 's a distinction between the two .
" Americans did n't set out to kill civilians " war hawks will huff .
Yes , but they 're still dead , are n't they ? Combat robots are simply continuing this process .
Right now there is still a man in the loop to order the attack .
Hamas kills Israeli targets with suicide bombs , Israelis deliver high explosives via missile into apartment blocks filled with civilians .
They 're using American-manufactured anti-tank missiles .
I think they 're still using TOW .
Predator drones use hellfires and their operators are sitting in the continental US while Israeli pilots are a few miles away from the target inside their choppers but really , what 's the difference ?
And what happens when drones are given the authority to engage targets on their own ?
A soldier with a gun can at least see what he 's shooting at .
Those in the artillery corps are firing their shells off into the unseen distance and have no idea who they 're killing .
Not that much different from laying land mines , indiscriminate killing .
Psychologically no different from what it would be to set a robot on patrol mode , fire-at-will.If one extrapolates a little further , the problem of the droid army is similar to that of the tradition of unpopular leaders using corps of foreign mercenaries to protect them from the wrath of the people .
The mercenaries did not speak the language , did not know the customs , and were counted as immune to palace intrigues .
They could be used against the people for they would not have the sympathy for fellow countrymen that a native force might feel .
What are droids being used for ?
Only the people operating them could say for sure .
Welcome to the age of the push-button assassination .</tokentext>
<sentencetext>This is one of the things that makes me think the concern about "friendly AI" is blown out of proportion.
The problem isn't making sure the AI's are "friendly" -- it's making sure the NI (natural intelligence) owners of the AI's are "friendly".If half the effort spent on "friendly AI" were spent on examining the ownership of AI's, there might be some hope.That's just it -- human nature never changes.
The general can order genocide but it's up to the soldiers to carry it out.
The My Lai Massacre was stopped by a helicopter pilot who put his bird between the civilians and "told his crew that if the U.S. soldiers shot at the Vietnamese while he was trying to get them out of the bunker that they were to open fire at these soldiers.
"http://en.wikipedia.org/wiki/My_Lai_Massacre [wikipedia.org]Robots aren't really the issue -- distancing humans from killing is the problem.
Not many of us could kill another human being with our bare hands.
A knife might make the task easier in the doing but does nothing to ease the psychological horror of it.
Guns let you do it at a distance.
You don't even have to touch the guy.
And buttons make it easier still.
It's like you're not even responsible.
You could convince young men to fly bombers over enemy cities and rain down incendiaries but I don't think you could convince many of them to kill even one of those civilians with a gun, let alone a knife.This is the strange distinction we make where we find one form of killing a horrible thing, a war crime, terrorism, and another form of killing is a regrettable accident but there's really no blame to be assigned.
A suicide bomber walks into a pizzeria and blows himself up, we lose our minds.
An Air Force bomber drops an LGB in a bunker filled with civilians instead of top brass, shit happens.
We honestly believe there's a distinction between the two.
"Americans didn't set out to kill civilians" war hawks will huff.
Yes, but they're still dead, aren't they?Combat robots are simply continuing this process.
Right now there is still a man in the loop to order the attack.
Hamas kills Israeli targets with suicide bombs, Israelis deliver high explosives via missile into apartment blocks filled with civilians.
They're using American-manufactured anti-tank missiles.
I think they're still using TOW.
Predator drones use hellfires and their operators are sitting in the continental US while Israeli pilots are a few miles away from the target inside their choppers but really, what's the difference?
And what happens when drones are given the authority to engage targets on their own?
A soldier with a gun can at least see what he's shooting at.
Those in the artillery corps are firing their shells off into the unseen distance and have no idea who they're killing.
Not that much different from laying land mines, indiscriminate killing.
Psychologically no different from what it would be to set a robot on patrol mode, fire-at-will.If one extrapolates a little further, the problem of the droid army is similar to that of the tradition of unpopular leaders using corps of foreign mercenaries to protect them from the wrath of the people.
The mercenaries did not speak the language, did not know the customs, and were counted as immune to palace intrigues.
They could be used against the people for they would not have the sympathy for fellow countrymen that a native force might feel.
What are droids being used for?
Only the people operating them could say for sure.
Welcome to the age of the push-button assassination.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775648
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776950
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779814
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775664
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779346
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30781794
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774966
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779128
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775722
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777610
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776416
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775084
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779654
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777778
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775664
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775290
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779390
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775722
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30849836
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775722
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779176
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775628
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774966
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777486
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775664
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775374
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774966
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776848
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30781626
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775998
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775320
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776776
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775632
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779362
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775632
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30788562
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776194
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776376
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775744
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775084
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779520
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775722
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779716
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775548
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30782386
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776260
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777018
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777596
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775776
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775840
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775722
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777902
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775632
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775862
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775632
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777886
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775998
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776042
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775664
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30778638
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30778596
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775632
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777218
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775776
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_15_028201_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776106
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_15_028201.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776380
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_15_028201.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779806
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_15_028201.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775082
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_15_028201.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775586
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_15_028201.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775040
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_15_028201.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775084
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775744
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776416
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_15_028201.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774970
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775422
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775632
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30778596
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777902
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776776
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779362
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775862
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777610
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779346
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776194
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30788562
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775664
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776042
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777778
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779654
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777486
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779814
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775648
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777018
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776376
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775722
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30849836
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779390
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779520
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775840
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776560
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779128
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30778638
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776848
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776106
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776950
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775290
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775320
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_15_028201.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776418
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_15_028201.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775548
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779716
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_15_028201.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30774966
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775628
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30779176
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30781794
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775374
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_15_028201.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776260
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30782386
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_15_028201.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30776358
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_15_028201.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775594
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_15_028201.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775998
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777886
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30781626
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_15_028201.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775610
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_15_028201.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775776
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777218
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30777596
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_15_028201.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775042
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_15_028201.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_15_028201.30775030
</commentlist>
</conversation>
