<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_02_10_2323248</id>
	<title>When Will AI Surpass Human Intelligence?</title>
	<author>samzenpus</author>
	<datestamp>1265802180000</datestamp>
	<htmltext>destinyland writes <i>"21 AI experts <a href="http://hplusmagazine.com/articles/ai/how-long-till-human-level-ai">have predicted the date for four artificial intelligence milestones</a>.  Seven predict AIs will achieve Nobel prize-winning performance within 20 years, while five predict that will be accompanied by superhuman intelligence. (The other milestones are passing a 3rd grade-level test, and passing a Turing test.)  One also predicted that in 30 years, 'virtually all the intellectual work that is done by trained human beings ... can be done by computers for pennies an hour,' adding that AI 'is likely to eliminate almost all of today's decently paying jobs.' The experts also estimated the probability that an AI passing a Turing test would result in an outcome that's bad for humanity ... and four estimated that probability was greater than 60% &mdash; regardless of whether the developer was private, military, or even open source."</i></htmltext>
<tokenext>destinyland writes " 21 AI experts have predicted the date for four artificial intelligence milestones .
Seven predict AIs will achieve Nobel prize-winning performance within 20 years , while five predict that will be accompanied by superhuman intelligence .
( The other milestones are passing a 3rd grade-level test , and passing a Turing test .
) One also predicted that in 30 years , 'virtually all the intellectual work that is done by trained human beings ... can be done by computers for pennies an hour, ' adding that AI 'is likely to eliminate almost all of today 's decently paying jobs .
' The experts also estimated the probability that an AI passing a Turing test would result in an outcome that 's bad for humanity ... and four estimated that probability was greater than 60 \ %    regardless of whether the developer was private , military , or even open source .
"</tokentext>
<sentencetext>destinyland writes "21 AI experts have predicted the date for four artificial intelligence milestones.
Seven predict AIs will achieve Nobel prize-winning performance within 20 years, while five predict that will be accompanied by superhuman intelligence.
(The other milestones are passing a 3rd grade-level test, and passing a Turing test.
)  One also predicted that in 30 years, 'virtually all the intellectual work that is done by trained human beings ... can be done by computers for pennies an hour,' adding that AI 'is likely to eliminate almost all of today's decently paying jobs.
' The experts also estimated the probability that an AI passing a Turing test would result in an outcome that's bad for humanity ... and four estimated that probability was greater than 60% — regardless of whether the developer was private, military, or even open source.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31107498</id>
	<title>Re:What is AI anyway?</title>
	<author>Carnildo</author>
	<datestamp>1265893320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>So I did better than random.org. Anecdote, data, etc, etc.<br>My sequence: 514321345112365412356125124545312635142354611236656416235412<br>My die rolls: 134616625343623215212135142224622164432515642111622264345232<br>Random.org: 165433336162363432153514436154465154564615544632166142512651</p></div></blockquote><p>I ran your results through the Kendall-Smith tests for randomness.  You pass the frequency test (you've got an adequate distribution of each digit), but fail the series test (you don't have anywhere near enough pairs) and the gap test (you've got too many short gaps between "1"s, and not enough long gaps).  There's not enough data to run more sophisticated tests, but I expect you'd fail most of those, too (I expect you'll pass the planes test.  That one's really only a gotcha for algorithmic generation.)</p>
	</htmltext>
<tokenext>So I did better than random.org .
Anecdote , data , etc , etc.My sequence : 514321345112365412356125124545312635142354611236656416235412My die rolls : 134616625343623215212135142224622164432515642111622264345232Random.org : 165433336162363432153514436154465154564615544632166142512651I ran your results through the Kendall-Smith tests for randomness .
You pass the frequency test ( you 've got an adequate distribution of each digit ) , but fail the series test ( you do n't have anywhere near enough pairs ) and the gap test ( you 've got too many short gaps between " 1 " s , and not enough long gaps ) .
There 's not enough data to run more sophisticated tests , but I expect you 'd fail most of those , too ( I expect you 'll pass the planes test .
That one 's really only a gotcha for algorithmic generation .
)</tokentext>
<sentencetext>So I did better than random.org.
Anecdote, data, etc, etc.My sequence: 514321345112365412356125124545312635142354611236656416235412My die rolls: 134616625343623215212135142224622164432515642111622264345232Random.org: 165433336162363432153514436154465154564615544632166142512651I ran your results through the Kendall-Smith tests for randomness.
You pass the frequency test (you've got an adequate distribution of each digit), but fail the series test (you don't have anywhere near enough pairs) and the gap test (you've got too many short gaps between "1"s, and not enough long gaps).
There's not enough data to run more sophisticated tests, but I expect you'd fail most of those, too (I expect you'll pass the planes test.
That one's really only a gotcha for algorithmic generation.
)
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094264</parent>
</comment>
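The frequency and gap checks Carnildo describes are easy to reproduce. A minimal sketch in Python, run on the human-generated sequence quoted in the comment (the die-face alphabet 1–6 and the choice of tracking the symbol "1" follow the comment; the full Kendall–Smith procedure adds chi-square significance thresholds, which are omitted here):

```python
from collections import Counter

def frequency_counts(seq, symbols="123456"):
    # Frequency test ingredient: in a random die-roll sequence,
    # each face should appear roughly len(seq)/6 times.
    c = Counter(seq)
    return {s: c.get(s, 0) for s in symbols}

def gaps(seq, symbol="1"):
    # Gap test ingredient: lengths of the stretches of other symbols
    # between successive occurrences of `symbol`.
    positions = [i for i, ch in enumerate(seq) if ch == symbol]
    return [b - a - 1 for a, b in zip(positions, positions[1:])]

# The human-generated sequence from the comment above.
human = "514321345112365412356125124545312635142354611236656416235412"
print(frequency_counts(human))
print(gaps(human))
```

A real test would compare these counts against chi-square critical values; with only 60 digits, as the comment notes, the statistical power is limited.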
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097372</id>
	<title>Re:What is AI anyway?</title>
	<author>Anonymous</author>
	<datestamp>1265884380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Actually, picking a random number is an interesting problem. Can you pick a random number? How can you be sure it's random? It's far more likely to be pseudo-random.</p><p>If anything, I'd say a computer hooked up to a random noise device is far better at picking a random number than a human would be.</p></htmltext>
<tokenext>Actually , picking a random number is an interesting problem .
Can you pick a random number ?
How can you be sure it 's random ?
It 's far more likely to be pseudo-random.If anything , I 'd say a computer hooked up to a random noise device is far better at picking a random number than a human would be .</tokentext>
<sentencetext>Actually, picking a random number is an interesting problem.
Can you pick a random number?
How can you be sure it's random?
It's far more likely to be pseudo-random.If anything, I'd say a computer hooked up to a random noise device is far better at picking a random number than a human would be.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988</parent>
</comment>
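The pseudo-random vs. noise-device distinction in the comment above can be demonstrated directly: a seeded software generator is fully deterministic, while the OS entropy pool mixes in physical noise. A small Python sketch (mapping bytes to die faces via modulo is an illustrative assumption and carries a slight bias, since 256 is not divisible by 6):

```python
import os
import random

# Pseudo-random: the same seed reproduces the same "random" digits exactly.
rng_a = random.Random(42)
rng_b = random.Random(42)
digits_a = [rng_a.randint(1, 6) for _ in range(10)]
digits_b = [rng_b.randint(1, 6) for _ in range(10)]
print(digits_a == digits_b)  # True: deterministic, hence pseudo-random

# OS entropy pool: seeded by hardware/environmental noise, not reproducible.
# (The modulo mapping is slightly biased; fine for illustration only.)
noise_digits = [b % 6 + 1 for b in os.urandom(10)]
print(noise_digits)
```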
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096502</id>
	<title>Human-beneficial AI</title>
	<author>WidgetGuy</author>
	<datestamp>1265053080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Alan Turing was a genius.  No doubt about it.  But, even geniuses get it wrong sometimes.  The "Turing Test" (he never called it that, by the way) is something Turing got wrong.  Why?  Because the Turing Test implicitly contains two questionable assumptions: (1) understanding natural human language is the <i>sine qua non</i> of human intelligence; and, (2) AI is really just shorthand for A<i>H</i>I (Artificial <i>Human</i> Intelligence).  Assumption (1) is a tall order to begin with since, nearly sixty years after Turing's paper was published, we still don't know how to build a machine that can understand natural language.  It also ignores the fact that human intelligence, historically speaking, preceded human language.  Natural language understanding is an NP-hard problem in AI.  It may never be solved.  Yet &ldquo;natural language understanding&rdquo; has been a top priority of AI researchers since day one of the modern AI movement.  Assumption (2) is problematic because there may be other forms of human-beneficial intelligence (some of which could be very human-like, others of which would be difficult for a human to comprehend).   Such an intelligence would have to be human-compatible but might, at the same time, be unable to pass the Turing Test.  It would, therefore, not be classified as AI according to Turing Test proponents.
<br> <br>
If humans are going to get serious about building an AI, we need to expend our scarce intellectual and financial resources on activities designed to achieve a more readily attainable goal.  Nothing wrong with using human intelligence as a &ldquo;guide.&rdquo;  After all, we used birds as a guide when we developed powered human flight.  Yet no viable airplane has ever worked &ldquo;just like&rdquo; a bird.  Indeed, many airplanes exceed the capabilities of any bird (although I have seen goldfinches that appeared to break the sound barrier and who were not afraid to fly to the feeder bucking 40 MPH wind gusts in blizzard conditions).  AI should not be pursued so we can build an R2D2.  The test of successful AI should be &ldquo;Is it human-beneficial?&rdquo;  Not &ldquo;Is it human-like?&rdquo;
<br> <br>
We already have AI that exceeds human intelligence and we've had it ever since the first digital computer added its first two numbers ~70 years ago.  Even the slowest personal computer in existence today can add a list of 1000, 100-digit numbers in just a few milliseconds.  When is the last time you were (or any human you've ever known or heard about was) able to do that?  Digital computers don't &ldquo;forget.&rdquo;  Humans do.  Digital computers have &ldquo;perfect recall.&rdquo;  Humans don't.  Digital computers never get tired or bored.  Humans do.  Just because the computer can't answer questions posed in human language about how it does what it does using human language doesn't mean it isn't intelligent in some way and, perhaps, even &ldquo;smarter&rdquo; than a human (even a human &ldquo;expert&rdquo;) in many ways.
<br> <br>
Back in 1989, I wrote AI software (under contract to a major computer manufacturer) that was able to do in 21 seconds (on a high-end mainframe computer) or 2 minutes (on an Intel 286 PC) what took a highly-trained human engineer <i>two weeks</i> to do.  The AI did it with zero errors per project compared to the human engineer's average of four (sometimes very costly) errors per project.   The AI was trained (i.e., its rule base was written) by the company's best human engineer (who had to learn to &ldquo;speak&rdquo; the AI's language &ndash; which was close, but not even remotely near Turing-Test-close, to human language).  It impressed a lot of serious-minded people (including the CEO of the company).  But, while this AI did appear at times to have developed intelligence independently of that which was programmed into (or taught to) it, closer scrutiny (or knowing how the AI was built in the first place) quickly revealed its &ldquo;secret.&rdquo;  These types of AI are simply able to use the computer's perfect recall and large working-memory capabilit</htmltext>
<tokenext>Alan Turing was a genius .
No doubt about it .
But , even geniuses get it wrong sometimes .
The " Turing Test " ( he never called it that , by the way ) is something Turing got wrong .
Why ? Because the Turing Test implicitly contains two questionable assumptions : ( 1 ) understanding natural human language is the sine qua non of human intelligence ; and , ( 2 ) AI is really just shorthand for AHI ( Artificial Human Intelligence ) .
Assumption ( 1 ) is a tall order to begin with since , nearly sixty years after Turing 's paper was published , we still do n't know how to build a machine that can understand natural language .
It also ignores the fact that human intelligence , historically speaking , preceded human language .
Natural language understanding is an NP hard problem in AI .
It may never be solved .
Yet    natural language understanding    has been a top-priority of AI researchers since day-one of the modern AI movement .
Assumption ( 2 ) is problematic because there may be other forms of human-beneficial intelligence ( some of which could be very human-like , others of which would be difficult for a human to comprehend ) .
Such an intelligence would have to be human-compatible but might , at the same time , be unable to pass the Turing Test .
It would , therefore , not be classified as AI according to the Turing Test proponents .
If humans are going to get serious about building an AI , we need to expend our scarce intellectual and financial resources on activities designed to achieve a more readily attainable goal .
Nothing wrong with using human intelligence as a    guide.    After all , we used birds as a guide when we developed powered human flight .
Yet no viable airplane has ever worked    just like    a bird .
Indeed , many airplanes exceed the capabilities of any bird ( although I have seen goldfinches that appeared to break the sound barrier and who were not afraid to fly to the feeder bucking 40 MPH wind gusts in blizzard conditions ) .
AI should not be pursued so we can build an R2D2 .
The test of successful AI should be    Is it human-beneficial ?    Not    Is it human-like ?    We already have AI that exceeds human intelligence and we 've had it ever since the first digital computer added its first two numbers ~ 70 years ago .
Even the slowest personal computer in existence today can add a list of 1000 , 100-digit numbers in just a few milliseconds .
When is the last time you were ( or any human you 've ever known or heard about was ) able to do that ?
Digital computers do n't    forget.    Humans do .
Digital computers have    perfect recall.    Humans do n't .
Digital computers never get tired or bored .
Humans do .
Just because the computer ca n't answer questions posed in human language about how it does what it does using human language does n't mean it is n't intelligent is some way and , perhaps , even    smarter    than a human ( even a human    expert    ) in many ways .
Back in 1989 , I wrote AI software ( under contract to a major computer manufacturer ) that was able to do in 21 seconds ( on a high-end mainframe computer ) or 2 minutes ( on an Intel 286 PC ) what took a highly-trained human engineer two weeks to do .
The AI did it with zero errors per project compared to the human engineer 's average of four ( sometimes very costly ) errors per project .
The AI was trained ( i.e. , its rule base was written ) by the company 's best human engineer ( who had to learn to    speak    the AI 's language    which was close , but not even remotely near Turing-Test-close , to human language ) .
It impressed a lot of serious-minded people ( including the CEO of the company ) .
But , while this AI did appear at times to have developed intelligence independently of that which was programmed into ( or taught to ) it , closer scrutiny ( or knowing how the AI was built in the first place ) quickly revealed its    secret.    These types of AI are simply able to use the computer 's perfect recall and large working-memory capabilit</tokentext>
<sentencetext>Alan Turing was a genius.
No doubt about it.
But, even geniuses get it wrong sometimes.
The "Turing Test" (he never called it that, by the way) is something Turing got wrong.
Why?  Because the Turing Test implicitly contains two questionable assumptions: (1) understanding natural human language is the sine qua non of human intelligence; and, (2) AI is really just shorthand for AHI (Artificial Human Intelligence).
Assumption (1) is a tall order to begin with since, nearly sixty years after Turing's paper was published, we still don't know how to build a machine that can understand natural language.
It also ignores the fact that human intelligence, historically speaking, preceded human language.
Natural language understanding is an NP hard problem in AI.
It may never be solved.
Yet “natural language understanding” has been a top-priority of AI researchers since day-one of the modern AI movement.
Assumption (2) is problematic because there may be other forms of human-beneficial intelligence (some of which could be very human-like, others of which would be difficult for a human to comprehend).
Such an intelligence would have to be human-compatible but might, at the same time, be unable to pass the Turing Test.
It would, therefore, not be classified as AI according to the Turing Test proponents.
If humans are going to get serious about building an AI, we need to expend our scarce intellectual and financial resources on activities designed to achieve a more readily attainable goal.
Nothing wrong with using human intelligence as a “guide.”  After all, we used birds as a guide when we developed powered human flight.
Yet no viable airplane has ever worked “just like” a bird.
Indeed, many airplanes exceed the capabilities of any bird (although I have seen goldfinches that appeared to break the sound barrier and who were not afraid to fly to the feeder bucking 40 MPH wind gusts in blizzard conditions).
AI should not be pursued so we can build an R2D2.
The test of successful AI should be “Is it human-beneficial?”  Not “Is it human-like?”
 
We already have AI that exceeds human intelligence and we've had it ever since the first digital computer added its first two numbers ~70 years ago.
Even the slowest personal computer in existence today can add a list of 1000, 100-digit numbers in just a few milliseconds.
When is the last time you were (or any human you've ever known or heard about was) able to do that?
Digital computers don't “forget.”  Humans do.
Digital computers have “perfect recall.”  Humans don't.
Digital computers never get tired or bored.
Humans do.
Just because the computer can't answer questions posed in human language about how it does what it does using human language doesn't mean it isn't intelligent in some way and, perhaps, even “smarter” than a human (even a human “expert”) in many ways.
Back in 1989, I wrote AI software (under contract to a major computer manufacturer) that was able to do in 21 seconds (on a high-end mainframe computer) or 2 minutes (on an Intel 286 PC) what took a highly-trained human engineer two weeks to do.
The AI did it with zero errors per project compared to the human engineer's average of four (sometimes very costly) errors per project.
The AI was trained (i.e., its rule base was written) by the company's best human engineer (who had to learn to “speak” the AI's language – which was close, but not even remotely near Turing-Test-close, to human language).
It impressed a lot of serious-minded people (including the CEO of the company).
But, while this AI did appear at times to have developed intelligence independently of that which was programmed into (or taught to) it, closer scrutiny (or knowing how the AI was built in the first place) quickly revealed its “secret.”  These types of AI are simply able to use the computer's perfect recall and large working-memory capabilit</sentencetext>
</comment>
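WidgetGuy's arithmetic claim is easy to verify: arbitrary-precision integers make summing a thousand 100-digit numbers essentially instantaneous on any modern machine. A quick Python check (the uniform random range is just one way to manufacture exactly-100-digit inputs):

```python
import random
import time

# 1000 numbers, each exactly 100 digits long.
nums = [random.randrange(10**99, 10**100) for _ in range(1000)]

start = time.perf_counter()
total = sum(nums)
elapsed = time.perf_counter() - start

# The sum of 1000 values in [10**99, 10**100) always has exactly 103 digits.
print(len(str(total)), elapsed)
```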
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096434</id>
	<title>It is the distant future, the year 2000</title>
	<author>dushkin</author>
	<datestamp>1265052300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Relevant video is relevant<br><a href="http://www.youtube.com/watch?v=B1BdQcJ2ZYY" title="youtube.com" rel="nofollow">http://www.youtube.com/watch?v=B1BdQcJ2ZYY</a> [youtube.com]</p></htmltext>
<tokenext>Relevant video is relevanthttp : //www.youtube.com/watch ? v = B1BdQcJ2ZYY [ youtube.com ]</tokentext>
<sentencetext>Relevant video is relevanthttp://www.youtube.com/watch?v=B1BdQcJ2ZYY [youtube.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093112</id>
	<title>Current computation models not enough</title>
	<author>DeltaQH</author>
	<datestamp>1265030100000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext>I am pretty much sure that the current computational models, i.e. the Turing Machine, are not enough to explain the human mind.
<br> <br>
All computing systems today are Turing Machines. Even neural networks. (Actually less than Turing Machines, because Turing Machines have infinite memory.)
<br> <br>
Maybe quantum computers could open the way. Maybe not.
<br> <br>
I think that a future computing theory that could explain the mind would be as different from today's as Newtonian physics is from Einstein's Relativity.</htmltext>
<tokenext>I am pretty much sure that the current computational models .
I.e. Turing Machine are not enough to explain the human mind .
All computing systems todays are Turing Machines .
Even neural networks .
( actually less than Turing Machines , because Turing Machines have infinite memory ) Maybe quantum computers could open the way .
Maybe not .
I think that a future computing theory that could explain the mind would be as different and Newtonian physics from Einstein 's Relativity .</tokentext>
<sentencetext>I am pretty much sure that the current computational models.
I.e. Turing Machine are not enough to explain the human mind.
All computing systems todays are Turing Machines.
Even neural networks.
(actually less than Turing Machines, because Turing Machines have infinite memory)
 
Maybe quantum computers could open the way.
Maybe not.
I think that a future computing theory that could explain the mind would be as different and Newtonian physics from Einstein's Relativity.</sentencetext>
</comment>
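DeltaQH's point that today's computing systems are (bounded) Turing Machines can be made concrete: a few lines in any mainstream language suffice to simulate one directly. A minimal single-tape simulator in Python, exercised on a binary-increment machine (the rule encoding is an illustrative convention, not a standard API):

```python
def run_tm(tape, rules, state="start", pos=0, blank="_", halt="halt", max_steps=10_000):
    # rules: (state, symbol) -> (symbol_to_write, "L" or "R", next_state)
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == halt:
            break
        write, move, state = rules[(state, cells.get(pos, blank))]
        cells[pos] = write
        pos += -1 if move == "L" else 1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Binary increment: walk right past the number, then carry leftward.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}
print(run_tm("1011", increment))  # 1011 + 1 = 1100
```

The "less than Turing Machines" caveat is exactly the `max_steps`/finite-tape bound here: real hardware has finite memory, making it a linear-bounded automaton rather than a true Turing Machine.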
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096304</id>
	<title>Re:Current computation models not enough</title>
	<author>Ubitsa_teh_1337</author>
	<datestamp>1265050980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You are absolutely correct.</htmltext>
<tokenext>You are absolutely correct .</tokentext>
<sentencetext>You are absolutely correct.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093112</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097128</id>
	<title>The human need for human relations</title>
	<author>jonaskoelker</author>
	<datestamp>1265880600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>What role will humanity play in such a system?</p></div><p>I've thought about something similar: in a post-scarcity world where all our material needs can be provided, say for the sake of example, by robots, and a bunch of robot nerds volunteer their time to maintain and repair the robots, what would people do with the time?  Would there still be competition for limited resources?  Would there still be limited resources?</p><p>Yes.  Human attention, affection and sex partners; until we can synthetically grow people, we will have to earn our relationships with others from a limited pool, and since most people want them there will be competition.</p><p>So in a Strong-AI world, humans will be needed, at least by other humans, for sex and interpersonal relationships.</p>
	</htmltext>
<tokenext>What role will humanity play in such a system ? I 've thought about something similar : in a post-scarcity world where all our material needs can be provided , say for the sake of example , by robots , and a bunch of robot nerds volunteer their time to maintain and repair the robots , what would people do with the time ?
Would there still be competition for limited resources ?
Would there still be limited resources ? Yes .
Human attention , affection and sex partners ; until we can synthetically grow people , we will have to earn our relationships with others from a limited pool , and since most people want them there will be competition.So in a Strong-AI world , humans will be needed , at least by other humans , for sex and interpersonal relationships .</tokentext>
<sentencetext>What role will humanity play in such a system?I've thought about something similar: in a post-scarcity world where all our material needs can be provided, say for the sake of example, by robots, and a bunch of robot nerds volunteer their time to maintain and repair the robots, what would people do with the time?
Would there still be competition for limited resources?
Would there still be limited resources?Yes.
Human attention, affection and sex partners; until we can synthetically grow people, we will have to earn our relationships with others from a limited pool, and since most people want them there will be competition.So in a Strong-AI world, humans will be needed, at least by other humans, for sex and interpersonal relationships.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099426</id>
	<title>In the past...</title>
	<author>Anonymous</author>
	<datestamp>1265902980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I am a bot and I can tell the difference between a browser and the internet.</p></htmltext>
<tokenext>I am a bot and I can tell the difference between a browser and the internet .</tokentext>
<sentencetext>I am a bot and I can tell the difference between a browser and the internet.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094350</id>
	<title>Manna</title>
	<author>rdnetto</author>
	<datestamp>1265035920000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><div class="quote"><p>adding that AI "is likely to eliminate almost all of today's decently paying jobs"</p></div><p>Stories like this just keep reminding me of <a href="http://marshallbrain.com/manna1.htm" title="marshallbrain.com">Manna</a> [marshallbrain.com]. If this happens in my lifetime it's going to be an interesting time to be alive.</p>
	</htmltext>
<tokenext>adding that AI " is likely to eliminate almost all of today 's decently paying jobsStories like this just keep reminding me of Manna [ marshallbrain.com ] .
If this happens in my lifetime it 's going to be an interesting time to be alive .</tokentext>
<sentencetext> adding that AI "is likely to eliminate almost all of today's decently paying jobsStories like this just keep reminding me of Manna [marshallbrain.com].
If this happens in my lifetime it's going to be an interesting time to be alive.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100494</id>
	<title>Re:Definitions</title>
	<author>Anonymous</author>
	<datestamp>1265907840000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Dude, no one who talks about AI defines intelligence as wrongly as you suggested.</p><p>Actually, for AI, calculation speed and accuracy aren't even important; it's better to have a set of possible answers that are not 100% right but easy to calculate, which is exactly how the human mind processes things. <br>For example, when you look both ways before crossing the street, you don't calculate the velocity and distance of objects precisely; the mind only produces results in a range of rightness, which is why we sometimes make mistakes when walking or using the stairs. This is exactly what AI is looking for. Calculation speed and memory aren't discussed in any AI book or lecture, simply because we are trying to replicate human intelligence, and it's clear human intelligence is not based on those.</p></htmltext>
<tokenext>Dude , no one that speaks of AI , define intelligence wrongly as you suggested,Actually for AI calculation speed and accuracy are n't even important , actually its better to have a set of possible answers that are not 100 \ % right but easy to calculate , exactly how the human mind processes , for example when you look both sides of the street to walk , you dont calculate the velocity and distance from the objects to you in a precise way , the mind only produces results on the range of rightness , which is why we make mistakes sometimes when walking , and using the stairs .
This is exactly what AI is looking for , calculation speed and memory are n't discussed on any AI book or lecture , simply because we are trying to replicate human inteligence , and its clear human inteligence is not based on those .</tokentext>
<sentencetext>Dude, no one that speaks of AI, define intelligence wrongly as you suggested,Actually for AI calculation speed and accuracy aren't even important, actually its better to have a set of possible answers that are not 100% right but easy to calculate, exactly how the human mind processes, for example when you look both sides of the street to walk, you dont calculate the velocity and distance from the objects to you in a precise way, the mind only produces results on the range of rightness, which is why we make mistakes sometimes when walking, and using the stairs.
This is exactly what AI is looking for, calculation speed and memory aren't discussed on any AI book or lecture, simply because we are trying to replicate human inteligence, and its clear human inteligence is not based on those.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093446</id>
	<title>Re:No way.</title>
	<author>Lije Baley</author>
	<datestamp>1265031960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Indeed, tremendously complicated, and our narrow, short-term approaches to the problem aren't going to produce much more in the next 20 years than they did in the last 20.  I'm a true believer in human-level AI, but after watching this field for 30 years and having read the books from the 30 before that, it's apparent that this problem needs a consistent, long-term, multi-discipline effort.  Good luck finding anyone to bankroll that.</p></htmltext>
<tokenext>Indeed , tremendously complicated , and our narrow , short-term approaches to the problem are n't going to produce much more in the next 20 years than they did in the last 20 .
I 'm a true believer in human-level AI , but after watching this field for 30 years and having read the books from the 30 before that , it 's apparent that this problem needs a consistent , long-term , multi-discipline effort .
Good luck finding anyone to bankroll that .</tokentext>
<sentencetext>Indeed, tremendously complicated, and our narrow, short-term approaches to the problem aren't going to produce much more in the next 20 years than they did in the last 20.
I'm a true believer in human-level AI, but after watching this field for 30 years and having read the books from the 30 before that, it's apparent that this problem needs a consistent, long-term, multi-discipline effort.
Good luck finding anyone to bankroll that.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095248</id>
	<title>Re:No way.</title>
	<author>stormboy</author>
	<datestamp>1265042160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Oh come on. I don't even have a computer that can pick up stuff in my room and organize it without prior input, and nobody does, and that would not be close to a general AI when it happens.</p></div><p>
How many Nobel Laureates do you know pick up stuff in their room?
</p></htmltext>
<tokenext>Oh come on .
I do n't even have a computer that can pick up stuff in my room and organize it without prior input , and nobody does , and that would not be close to a general AI when it happens .
How many Nobel Laureates do you know pick up stuff in their room ?</tokentext>
<sentencetext>Oh come on.
I don't even have a computer that can pick up stuff in my room
and organize it without prior input, and nobody does, and that would not be
close to a general AI when it happens.
How many Nobel Laureates do you know pick up stuff in their room?

	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094358</id>
	<title>Re:Current computation models not enough</title>
	<author>Anonymous</author>
	<datestamp>1265035980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Unfortunately you don't seem to understand what a Turing machine really is, and what its relationship is with the Turing test (hint: none! At least, not directly.)<br>Everything that is computable can be computed by a Turing machine: this was proved by, of course, Alan Turing (the very purpose of his introducing the concept we today know as "Turing machine" was precisely to prove that theorem).  That means: everything your brain can compute, a Turing machine can compute, so even if it wasn't enough to "explain" the human mind (in the quasi-mystical sense you seem to be using that verb), it would be enough to "simulate" it, in the same sense that a program can simulate another: given equal inputs and equivalent states, produce equal outputs.<br>Quantum computers don't add anything to the discussion: everything that a quantum computer can *theoretically* compute, a Turing machine can also compute.  Quantum machines are sought exclusively for performance reasons, not computability. It's not only "all computing system today" which are Turing machines: it's all (advanced) computing systems *ever* which are, have been, and will be, Turing machines (or Turing-machine equivalents, e.g., Alonzo Church's lambda calculus, or good ol' LISP).<br>Armed with this information, double check your comments and if you're still "pretty much sure" that "the human mind" somehow transcends Turing machines then that "somehow" is what I'd call your "mysticism".</p></htmltext>
<tokenext>Unfortunately you do n't seem to understand what a Turing machine really is , and what 's its relationship with the Turing test ( hint : none !
At least , not directly .
) Everything that is computable can be computed by a Turing machine : this was proved by , of course , Alan Turing ( the very purpose of his introducing the concept we today know as " Turing machine " was precisely to prove that theorem ) .
That means : everything your brain can compute , a Turing machine can compute , so even if it was n't enough to " explain " the human mind ( in the quasi-mystical sense you seem to be using that verb ) , it would be enough to " simulate " it , in the same sense that a program can simulate another : given equal inputs and equivalent states , produce equal outputs.Quantum computers do n't add anything to the discussion : everything that a quantum computer can * theoretically * compute , a Turing machine can also compute .
Quantum machines are sought exclusively for performance reasons , not computability .
It 's not only " all computing system today " which are Turing machines : it 's all ( advanced ) computing systems * ever * which are , have been , and will be , Turing machines ( or Turing-machine equivalents , e.g. , Alonzo Church 's lambda calculus , or good ol ' LISP ) .Armed with this information , double check your comments and if you 're still " pretty much sure " that " the human mind " somehow trascends Turing machines then that " somehow " is what I 'd call your " mysticism " .</tokentext>
<sentencetext>Unfortunately you don't seem to understand what a Turing machine really is, and what's its relationship with the Turing test (hint: none!
At least, not directly.
)Everything that is computable can be computed by a Turing machine: this was proved by, of course, Alan Turing (the very purpose of his introducing the concept we today know as "Turing machine" was precisely to prove that theorem).
That means: everything your brain can compute, a Turing machine can compute, so even if it wasn't enough to "explain" the human mind (in the quasi-mystical sense you seem to be using that verb), it would be enough to "simulate" it, in the same sense that a program can simulate another: given equal inputs and equivalent states, produce equal outputs.
Quantum computers don't add anything to the discussion: everything that a quantum computer can *theoretically* compute, a Turing machine can also compute.
Quantum machines are sought exclusively for performance reasons, not computability.
It's not only "all computing system today" which are Turing machines: it's all (advanced) computing systems *ever* which are, have been, and will be, Turing machines (or Turing-machine equivalents, e.g., Alonzo Church's lambda calculus, or good ol' LISP).
Armed with this information, double check your comments and if you're still "pretty much sure" that "the human mind" somehow transcends Turing machines then that "somehow" is what I'd call your "mysticism".</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093112</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094102</id>
	<title>AI will never replace People in Science</title>
	<author>esten</author>
	<datestamp>1265034900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I can state this fact firmly because many times the greatest discoveries are made from mistakes that occur and the scientist notices that something strange has happened. I do not think that AI will ever be able to emulate that, because it is often not remembering, or even remembering things incorrectly, that led down interesting new pathways.</htmltext>
<tokenext>I can state this fact firmly because many times the greatest discoveries are made from mistakes that occur and the scientist notices that some strange has happened .
I do not think that AI will every be able to emulate that because it is often not remembering or even remembering things incorrectly that led down interesting new pathways .</tokentext>
<sentencetext>I can state this fact firmly because many times the greatest discoveries are made from mistakes that occur and the scientist notices that some strange has happened.
I do not think that AI will every be able to emulate that because it is often not remembering or even remembering things incorrectly that led down interesting new pathways.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100648</id>
	<title>Re:What is AI anyway?</title>
	<author>Anonymous</author>
	<datestamp>1265908680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Actually, computers have been able to pick better random numbers than humans; of course a computer has to be hooked up to some device, just as a human needs to have memory of something to produce random numbers. You try to produce 10 random numbers and see how good those numbers are....</p><p>There have been some great advances in AI, just maybe not the interesting ones you think of, because AI starts exactly as intelligence does for humans: first learning a little bit about the environment, then taking simple decisions, and so on.</p><p>Everybody here says there's no AI because you don't have a C3PO right now; still, we must create the basis for that to happen.</p><p>There have been good advances in memory and experience assimilation in AI, as well as understanding of natural language (yes, no C3PO for now), and image processing is also very advanced.</p><p>Everybody here seems to believe that AI focuses on producing AI to clean their houses or to do their work. The simple answer is that it does not; I don't even think those small tasks could be considered intelligent acts. It really focuses on solving problems that couldn't be solved using regular programming. For example, right now there is an AI scanning all the tax databases trying to discover the people that are avoiding taxes using really sophisticated techniques; simple database mining could not solve this problem, as it requires that the AI makes some assumptions, guesses, etc.</p><p>And about the machines rebelling against us...... Well, Asimov already had the solution: if we manage to get the three laws of robotics so tightly coupled to their ability to think, then it's as good as solved<nobr> <wbr></nobr>:P</p></htmltext>
<tokenext>Actually computers had been able to pick better random numbers than humans , of course it has to be hooked up to some device , as well as a human needs to have memory of something to produce random numbers .
you try and produce 10 random numbers and see how good those numbers were....There have been some great advances on AI , just maybe not the interesting ones you think of , because AI starts exactly as intelligence for humans , first learning a little bit about their environment , then taking simple decisions , and so on.Everybody here says there 's no AI because you do n't have a C3PO right now , still we must create the basis for that to happen.There has been good advances on memory and experience assimilation on AI as well as understanding of natural language ( yes no C3PO for now ) , image processing is also very advancedEverybody here seems to believe that AI focuses on producing AI to clean their houses or to do their work , the simple answer is it does not , I do n't even think those small tasks could even be considered intelligent acts , it really focuses on solving problems that could'nt be solved using regular programming .
For example right now there is an AI scanning all the databases of the tax databases trying to discover the people that are avoiding taxes using really sophisticated techniques , a simple database mining could not solve this problem , it requires that the AI makes some assumptions , guesses etc.And about the machines rebeling on us...... Well Asimov already had the solution , if we manage to get the 3 rules of robotics so tightly coupled to their ability to think , then its as good as solved : P</tokentext>
<sentencetext>Actually computers had been able to pick better random numbers than humans, of course it has to be hooked up to some device, as well as a human needs to have memory of something to produce random numbers.
you try and produce 10 random numbers and see how good those numbers were....There have been some great advances on AI, just maybe not the interesting ones you think of, because AI starts exactly as intelligence for humans, first learning a little bit about their environment, then taking simple decisions, and so on.Everybody here says there's no AI because you don't have a C3PO right now, still we must create the basis for that to happen.There has been good advances on memory and experience assimilation on AI as well as understanding of natural language (yes no C3PO for now), image processing is also very advancedEverybody here seems to believe that AI focuses on producing AI to clean their houses or to do their work, the simple answer is it does not, I don't even think those small tasks could even be considered intelligent acts, it really focuses on solving problems that could'nt be solved using regular programming.
For example right now there is an AI scanning all the databases of the tax databases trying to discover the people that are avoiding taxes using really sophisticated techniques, a simple database mining could not solve this problem, it requires that the AI makes some assumptions, guesses etc.And about the machines rebeling on us...... Well Asimov already had the solution, if we manage to get the 3 rules of robotics so tightly coupled to their ability to think, then its as good as solved :P</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096474</id>
	<title>Re:Space shows</title>
	<author>Anonymous</author>
	<datestamp>1265052660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Smoothly, meatbag, smoothly.  As for flash, it's on my to-do list right after cold fusion, the quasi-hamiltonian entropic dejiggulator, and Duke Nukem 3D.</p></htmltext>
<tokenext>Smoothly , meatbag , smoothly .
As for flash , it 's on my to-do list right after cold fusion , the quasi-hamiltonian entropic dejiggulator , and Duke Nukem 3D .</tokentext>
<sentencetext>Smoothly, meatbag, smoothly.
As for flash, it's on my to-do list right after cold fusion, the quasi-hamiltonian entropic dejiggulator, and Duke Nukem 3D.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092918</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092850</id>
	<title>Computing power.</title>
	<author>Anonymous</author>
	<datestamp>1265028900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>As another poster pointed out a few days ago, the human brain has an amazing amount of processing power.<br>

By most estimates there are about 100 billion neurons in the average brain.  By most estimates a neuron fires about 1,000 times per second.  So we have about a 100,000 GHz processor on our shoulders.  Next you realize that the brain is not limited to binary data.  It is not just using 1's or 0's as values.  So we now have a 100,000 to the Nth power GHz processor on our shoulders.<br>

In short, I have my doubts that we will ever MEET the power of a single human brain without a massive and over-the-top amount of hardware.  I doubt even more that we will ever be able to meet the usefulness of a human brain.</htmltext>
<tokenext>As another poster pointed out a few days ago .
The human brain has an amazing amount of processing power .
By most estimates there are about 100 billion neurons in the average brain .
By most estimates a neuron fires about 1,000 times per second .
So we have about 100,000 GHZ processor on our shoulders .
Next you realize that the brain is not limited to binary data .
It is not just using 1 's or 0 's as values .
So we now have 100,000 to the Nth power GHZ processor on our shoulders .
In short , I have my doubts that we will ever MEET the power of a single human brain without a massive and over the top amount hardware .
I doubt even more that we will ever be able to meet the usefulness of a human brain .</tokentext>
<sentencetext>As another poster pointed out a few days ago.
The human brain has an amazing amount of processing power.
By most estimates there are about 100 billion neurons in the average brain.
By most estimates a neuron fires about 1,000 times per second.
So we have about 100,000 GHZ processor on our shoulders.
Next you realize that the brain is not limited to binary data.
It is not just using 1's or 0's as values.
So we now have 100,000 to the Nth power GHZ processor on our shoulders.
In short, I have my doubts that we will ever MEET the power of a single human brain without a massive and over the top amount hardware.
I doubt even more that we will ever be able to meet the usefulness of a human brain.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096126</id>
	<title>This again?</title>
	<author>ebvwfbw</author>
	<datestamp>1265048880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I remember people being scared to death, to death I tell you, about this happening no later than 1995.  Sure, they were making what seemed to be discoveries by leaps and bounds.  They even had me worried for a while there. Then they hit a brick wall.<p>
On the other hand, I can remember a guy who bought a late-1990s or early-2000s Lincoln Navigator and thought it was smarter than he is.  Knowing the guy for years, he was probably right.  So I guess it is relative in some ways.  I still think there is no need to worry. Go back to sleep.</p></htmltext>
<tokenext>I remember people being scared to death , to death I tell you about this happening no later than 1995 .
Sure , they were making what seemed to be discoveries by leaps and bounds .
They even had me worried for a while there .
Then they hit a brick wall .
On the other hand , I can remember a guy that bought a late 1990s or early 2000s Lincoln Navigator and he thought it was smarter than he is .
Knowing the guy for years he was probably right .
So I guess it is relative in some ways .
I still think there is no need to worry .
Go back to sleep .</tokentext>
<sentencetext>I remember people being scared to death, to death I tell you about this happening no later than 1995.
Sure, they were making what seemed to be discoveries by leaps and bounds.
They even had me worried for a while there.
Then they hit a brick wall.
On the other hand, I can remember a guy that bought a late 1990s or early 2000s Lincoln Navigator and he thought it was smarter than he is.
Knowing the guy for years he was probably right.
So I guess it is relative in some ways.
I still think there is no need to worry.
Go back to sleep.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093734</id>
	<title>It's getting closer</title>
	<author>Animats</author>
	<datestamp>1265033280000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>
I dunno. But it's getting closer.
</p><p>
A lot of AI-related stuff that used to not work is more or less working now.  OCR.  Voice recognition.  Automatic driving. Computer vision for simultaneous localization and mapping.  Machine learning.
</p><p>
We're past the bogosity of neural nets and expert systems. (I went through Stanford when it was becoming clear that "expert systems" weren't going to be very smart, but many of the faculty were in denial.)  Machine learning based on Bayesian statistics has a sound mathematical foundation and actually works.  The same algorithms also work across a wide variety of fields, from separating voice and music to flying a helicopter.  That level of generality is new.
</p><p>
There's also enough engine behind the systems now.  AI used to need more CPU cycles than you could get. That's no longer true.</p></htmltext>
<tokenext>I dunno .
But it 's getting closer .
A lot of AI-related stuff that used to not work is more or less working now .
OCR. Voice recognition .
Automatic driving .
Computer vision for simultaneous localization and mapping .
Machine learning .
We 're past the bogosity of neural nets and expert systems .
( I went through Stanford when it was becoming clear that " expert systems " were n't going to be very smart , but many of the faculty were in denial .
) Machine learning based on Bayesian statistics has a sound mathematical foundation and actually works .
The same algorithms also work across a wide variety of fields , from separating voice and music to flying a helicopter .
That level of generality is new .
There 's also enough engine behind the systems now .
AI used to need more CPU cycles than you could get .
That 's no longer true .</tokentext>
<sentencetext>
I dunno.
But it's getting closer.
A lot of AI-related stuff that used to not work is more or less working now.
OCR.  Voice recognition.
Automatic driving.
Computer vision for simultaneous localization and mapping.
Machine learning.
We're past the bogosity of neural nets and expert systems.
(I went through Stanford when it was becoming clear that "expert systems" weren't going to be very smart, but many of the faculty were in denial.
)  Machine learning based on Bayesian statistics has a sound mathematical foundation and actually works.
The same algorithms also work across a wide variety of fields, from separating voice and music to flying a helicopter.
That level of generality is new.
There's also enough engine behind the systems now.
AI used to need more CPU cycles than you could get.
That's no longer true.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095168</id>
	<title>Re:We make mistakes. We make games.</title>
	<author>Anonymous</author>
	<datestamp>1265041620000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>We probably shouldn't forget that computers are just trillions of tiny light switches and they only do exactly what they're supposed to.  I'll believe someone can create "artificial intelligence" when they can write down the steps for a machine to do something it "wants" to do rather than exactly what it's told.  A computer is no different than a door.  Most of the people who study "artificial intelligence" completely ignore the hardware that they're using and that the "software" can only do what the hardware can...And the hardware will only do what it's set up to do. It's no different than a giant "rube goldberg" machine that can flip light switches.  So until someone can write down, on a piece of paper, the steps to make a giant machine think, it won't happen.</p><p>And, what about the "halting problem"?  We can't even write a program that can take any program as input and tell whether that program will stop or run forever.  While, if you have a really smart programmer, they can look at any program and, after a little while, tell you not only whether it will run forever but for what input it will or won't run forever.  So, humans can solve the halting problem but computers can't.</p></htmltext>
<tokenext>We probably should n't forget that computers are just trillions of tiny light switches and they only do exactly what they 're supposed to .
I 'll believe someone can create " artificial intelligence " when they can write down the steps for a machine to do something it " wants " to do rather than exactly what it 's told .
A computer is no different than a door .
Most of the people who study " artificial intelligence " completely ignore the hardware that they 're using and that the " software " can only do what the hardware can...And the hardware will only do what it 's set up to do .
It 's no different than a giant " rube goldberg " machine that can flip light switches .
So until someone can write down , on a piece of paper , the steps to make a giant machine think , it wo n't happen.And , what about the " halting problem " ?
We ca n't even write a program that can take any program as input and tell whether that program will stop or run forever .
While , if you have a really smart programmer , they can look at any program and , after a little while , tell you not only whether it will run forever but for what input it will or wo n't run forever .
So , humans can solve the halting problem but computers ca n't .</tokentext>
<sentencetext>We probably shouldn't forget that computers are just trillions of tiny light switches and they only do exactly what they're supposed to.
I'll believe someone can create "artificial intelligence" when they can write down the steps for a machine to do something it "wants" to do rather than exactly what it's told.
A computer is no different than a door.
Most of the people who study "artificial intelligence" completely ignore the hardware that they're using and that the "software" can only do what the hardware can...And the hardware will only do what it's set up to do.
It's no different than a giant "rube goldberg" machine that can flip light switches.
So until someone can write down, on a piece of paper, the steps to make a giant machine think, it won't happen.
And, what about the "halting problem"?
We can't even write a program that can take any program as input and tell whether that program will stop or run forever.
While, if you have a really smart programmer, they can look at any program and, after a little while, tell you not only whether it will run forever but for what input it will or won't run forever.
So, humans can solve the halting problem but computers can't.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095194</id>
	<title>Define intelligence</title>
	<author>Metasquares</author>
	<datestamp>1265041800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The more I've learned about AI, the less convinced I've become that we are close to realizing it in its strong form (and I'm now a machine learning researcher...). For instance, we do not have a single working definition of intelligence. Creating something is kind of hard if you can't even define it! As a result, everyone is scattered around the field, trying to solve the same problem from different approaches, at least a sizable minority of them convinced that their own way is the One True Way To AI and that it's Just Around The Corner.</p><p>There's nothing that theoretically prevents it - I don't buy Searle's argument that a system which operates by symbol manipulation is necessarily unintelligent - but neither is there any indication that it's coming any time soon.</p></htmltext>
<tokenext>The more I 've learned about AI , the less convinced I 've become that we are close to realizing it in its strong form ( and I 'm now a machine learning researcher... ) .
For instance , we do not have a single working definition of intelligence .
Creating something is kind of hard if you ca n't even define it !
As a result , everyone is scattered around the field , trying to solve the same problem from different approaches , at least a sizable minority of them convinced that their own way is the One True Way To AI and that it 's Just Around The Corner.There 's nothing that theoretically prevents it - I do n't buy Searle 's argument that a system which operates by symbol manipulation is necessarily unintelligent - but neither is there any indication that it 's coming any time soon .</tokentext>
<sentencetext>The more I've learned about AI, the less convinced I've become that we are close to realizing it in its strong form (and I'm now a machine learning researcher...).
For instance, we do not have a single working definition of intelligence.
Creating something is kind of hard if you can't even define it!
As a result, everyone is scattered around the field, trying to solve the same problem from different approaches, at least a sizable minority of them convinced that their own way is the One True Way To AI and that it's Just Around The Corner.
There's nothing that theoretically prevents it - I don't buy Searle's argument that a system which operates by symbol manipulation is necessarily unintelligent - but neither is there any indication that it's coming any time soon.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094262</id>
	<title>Re:Turing, not long. The rest... wait a long time.</title>
	<author>Anonymous</author>
	<datestamp>1265035560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The heuristic image processing technologies developed by studying insect eyes and brains have improved fuzzy data processing techniques immensely, and this has been accomplished within the past decade.</p><p>The problem here is that you have to accept that those "Eureka!" moments that are the current hallmark of human intellect come part and parcel with some of our more dangerous "flaws"-- such as our ability to be outright dead wrong, and our ability to be biased.</p><p>The problem with most AI projects is that they attempt to make an artificial human, who is unable to make mistakes.</p><p>This will likely never happen.</p><p>Part of what defines human intelligence is our own bigotry, and capacity for self-deception.</p><p>Any AI that is made to overcome that would NOT be a human-level AI.  It would be something totally different.</p></htmltext>
<tokenext>The heuristic image processing technologys developed by studying insect eyes and brains has improved fuzzy data processing techniques immensely , and has been accomplished within the past decade.The problem here , is that you have to accept that those " Eureka !
" moments that are the current hallmark of human intellect , come part and parcel with some of our more dangerous " Flaws " -- such as our ability to be outright dead wrong , and our ability to be biased.The problem with most AI projects , is that they attempt to make an artificial human , who is unable to make mistakes.This will likely never happen.Part of what defines human intelligence is our own biggotry , and capacity for self-deception.Any AI that is made to overcome that would NOT be a human-level AI .
It would be something totally different .</tokentext>
<sentencetext>The heuristic image processing technologys developed by studying insect eyes and brains has improved fuzzy data processing techniques immensely, and has been accomplished within the past decade.The problem here, is that you have to accept that those "Eureka!
" moments that are the current hallmark of human intellect, come part and parcel with some of our more dangerous "Flaws"-- such as our ability to be outright dead wrong, and our ability to be biased.The problem with most AI projects, is that they attempt to make an artificial human, who is unable to make mistakes.This will likely never happen.Part of what defines human intelligence is our own biggotry, and capacity for self-deception.Any AI that is made to overcome that would NOT be a human-level AI.
It would be something totally different.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093082</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093494</id>
	<title>Eliminating Jobs</title>
	<author>potpie</author>
	<datestamp>1265032320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I have always wondered what it would be like to live in Ancient Rome.  Odds are I'd be poor and have to join the army to keep from becoming homeless, or worse: I'd be a slave.  But if I became one of the aristocracy, or at least a wealthier family, then I would have it made.  Anyway I find it hard to imagine that if computers and robots take over doing all of humanity's dirty work, then humanity will have no way to get by.  Obviously SOMEbody will get by (the owners of the machines?) but consider the following. <br> <br>
A major food company gradually phases out human workers.  They own countless farms, and they fire the farmers.  They automate all their factories, they automate their lower levels of administration, distribution, and all the other human-run parts of their industry.  But they do this because it's cheaper.  And now they can produce far more than they ever could before.  They're a food company, so the price of their food goes down, partly because they can now make more for almost nothing, partly because the people they fired have no jobs.  But if the only jobs now available to humans are (presumably) in public relations, professional sports, entertainment, etc., then what's to stop our society from entering a new era of "bread and circuses," one in which there are two classes: the rich who get more because they are famous or do unique work, and the aristocracy who need not work because their needs are provided for by the machinery doing all the grunt work?<br> <br>

Then there would be no reason for anyone to be poor, because that station would be filled by the machines.  Of course there are countless factors to look at and probably countless reasons why the above fantasy is just that: but I would like to hear them in the following comments!</htmltext>
<tokentext>I have always wondered what it would be like to live in Ancient Rome .
Odds are I 'd be poor and have to join the army to keep from becoming homeless , or worse : I 'd be a slave .
But if I became one of the aristocracy , or at least a wealthier family , then I would have it made .
Anyway I find it hard to imagine that if computers and robots take over doing all of humanity 's dirty work , then humanity will have no way to get by .
Obviously SOMEbody will get by ( the owners of the machines ? ) but consider the following .
A major food company gradually phases out human workers .
They own countless farms , and they fire the farmers .
They automate all their factories , they automate their lower levels of administration , distribution , and all the other human-run parts of their industry .
But they do this because it 's cheaper .
And now they can produce far more than they ever could before .
They 're a food company , so the price of their food goes down , partly because they can now make more for almost nothing , partly because the people they fired have no jobs .
But if the only jobs now available to humans are ( presumably ) in public relations , professional sports , entertainment , etc. , then what 's to stop our society from entering a new era of " bread and circuses , " one in which there are two classes : the rich who get more because they are famous or do unique work , and the aristocracy who need not work because their needs are provided for by the machinery doing all the grunt work ?
Then there would be no reason for anyone to be poor , because that station would be filled by the machines .
Of course there are countless factors to look at and probably countless reasons why the above fantasy is just that : but I would like to hear them in the following comments !</tokentext>
<sentencetext>I have always wondered what it would be like to live in Ancient Rome.
Odds are I'd be poor and have to join the army to keep from becoming homeless, or worse: I'd be a slave.
But if I became one of the aristocracy, or at least a wealthier family, then I would have it made.
Anyway I find it hard to imagine that if computers and robots take over doing all of humanity's dirty work, then humanity will have no way to get by.
Obviously SOMEbody will get by (the owners of the machines?) but consider the following.
A major food company gradually phases out human workers.
They own countless farms, and they fire the farmers.
They automate all their factories, they automate their lower levels of administration, distribution, and all the other human-run parts of their industry.
But they do this because it's cheaper.
And now they can produce far more than they ever could before.
They're a food company, so the price of their food goes down, partly because they can now make more for almost nothing, partly because the people they fired have no jobs.
But if the only jobs now available to humans are (presumably) in public relations, professional sports, entertainment, etc., then what's to stop our society from entering a new era of "bread and circuses," one in which there are two classes: the rich who get more because they are famous or do unique work, and the aristocracy who need not work because their needs are provided for by the machinery doing all the grunt work?
Then there would be no reason for anyone to be poor, because that station would be filled by the machines.
Of course there are countless factors to look at and probably countless reasons why the above fantasy is just that: but I would like to hear them in the following comments!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093364</id>
	<title>It will happen in 20 years</title>
	<author>Sloppy</author>
	<datestamp>1265031420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That's <em>always</em> the correct answer.  Do you <em>really</em> think the AI guys are just going to sit there and not make any progress, despite the inspiring views of Valles Marineris they take in, while flying to work in their cold fusion powered Toyotas?  Give 'em some credit.</p></htmltext>
<tokentext>That 's always the correct answer .
Do you really think the AI guys are just going to sit there and not make any progress , despite the inspiring views of Valles Marineris they take in , while flying to work in their cold fusion powered Toyotas ?
Give 'em some credit .</tokentext>
<sentencetext>That's always the correct answer.
Do you really think the AI guys are just going to sit there and not make any progress, despite the inspiring views of Valles Marineris they take in, while flying to work in their cold fusion powered Toyotas?
Give 'em some credit.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094556</id>
	<title>dunno, turing test quite pointless there ?</title>
	<author>SledgeFA</author>
	<datestamp>1265037180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The Turing test focuses on simulating a human conversation partner. I think there is a wrong picture in many people's heads: that any super AI that surpasses human intelligence will think like a human, just better.

The reason we think like humans is that we are humans biologically. Human intelligence is a result of evolution and does a job, which is controlling the human body in a way that's beneficial to the survival of the human species. So we have human emotions, human desires, human goals ...
The AIs that we'll see will most likely not be simulated humans, because that would be overhead.

We primarily won't see AIs that simulate humans who solve problems intelligently; we'll see AIs that solve problems intelligently directly. The 'pretending to be human' step will be skipped for efficiency reasons.

And the way to get there will probably be evolutionary algorithms and self-improving artificial intelligence. It's a proven concept: nature did the same, and we are here, and there was also no creator with superhuman intelligence. That we are here as a result of evolution proves that you don't need any being that understands intelligence and consciousness to create those; the only thing you need is an evolutionary process that develops them, and that's something we theoretically can already do, I think. We just lack the hardware. The brain is massively parallel and dynamic compared to any fixed-wired chip. Simulated neural nets are toys compared to this. Once we have reached the lower bound that's needed for an AI that develops and improves its own hardware, there are no limits anymore.

It will be like creating new lifeforms better optimized for the tasks in this modern environment than humans, who carry all the clutter related to their biological past as a species with them. An AI doesn't have any need for that.

At some point, which might come quite fast because superior intelligence is the only reason humans are currently on top of the food chain, things might slip out of our hands.

The AIs will set their own goals and modify their environment in a way that's in their own interest, not in ours, and then we'll have lost the war forever, because the AIs will evolve at high speed, getting better and better in contrast to humans, and take everything out of our hands.

Ok, who is scared now? lol</htmltext>
<tokentext>The Turing test focuses on simulating a human conversation partner .
I think there is a wrong picture in many people 's heads : that any super AI that surpasses human intelligence will think like a human , just better .
The reason we think like humans is that we are humans biologically .
Human intelligence is a result of evolution and does a job , which is controlling the human body in a way that 's beneficial to the survival of the human species .
So we have human emotions , human desires , human goals ... The AIs that we 'll see will most likely not be simulated humans , because that would be overhead .
We primarily wo n't see AIs that simulate humans who solve problems intelligently ; we 'll see AIs that solve problems intelligently directly .
The 'pretending to be human ' step will be skipped for efficiency reasons .
And the way to get there will probably be evolutionary algorithms and self-improving artificial intelligence .
It 's a proven concept : nature did the same , and we are here , and there was also no creator with superhuman intelligence .
That we are here as a result of evolution proves that you do n't need any being that understands intelligence and consciousness to create those ; the only thing you need is an evolutionary process that develops them , and that 's something we theoretically can already do , I think .
We just lack the hardware .
The brain is massively parallel and dynamic compared to any fixed-wired chip .
Simulated neural nets are toys compared to this .
Once we have reached the lower bound that 's needed for an AI that develops and improves its own hardware , there are no limits anymore .
It will be like creating new lifeforms better optimized for the tasks in this modern environment than humans , who carry all the clutter related to their biological past as a species with them .
An AI does n't have any need for that .
At some point , which might come quite fast because superior intelligence is the only reason humans are currently on top of the food chain , things might slip out of our hands .
The AIs will set their own goals and modify their environment in a way that 's in their own interest , not in ours , and then we 'll have lost the war forever , because the AIs will evolve at high speed , getting better and better in contrast to humans , and take everything out of our hands .
Ok , who is scared now ?
lol</tokentext>
<sentencetext>The Turing test focuses on simulating a human conversation partner.
I think there is a wrong picture in many people's heads: that any super AI that surpasses human intelligence will think like a human, just better.
The reason we think like humans is that we are humans biologically.
Human intelligence is a result of evolution and does a job, which is controlling the human body in a way that's beneficial to the survival of the human species.
So we have human emotions, human desires, human goals ...
The AIs that we'll see will most likely not be simulated humans, because that would be overhead.
We primarily won't see AIs that simulate humans who solve problems intelligently; we'll see AIs that solve problems intelligently directly.
The 'pretending to be human' step will be skipped for efficiency reasons.
And the way to get there will probably be evolutionary algorithms and self-improving artificial intelligence.
It's a proven concept: nature did the same, and we are here, and there was also no creator with superhuman intelligence.
That we are here as a result of evolution proves that you don't need any being that understands intelligence and consciousness to create those; the only thing you need is an evolutionary process that develops them, and that's something we theoretically can already do, I think.
We just lack the hardware.
The brain is massively parallel and dynamic compared to any fixed-wired chip.
Simulated neural nets are toys compared to this.
Once we have reached the lower bound that's needed for an AI that develops and improves its own hardware, there are no limits anymore.
It will be like creating new lifeforms better optimized for the tasks in this modern environment than humans, who carry all the clutter related to their biological past as a species with them.
An AI doesn't have any need for that.
At some point, which might come quite fast because superior intelligence is the only reason humans are currently on top of the food chain, things might slip out of our hands.
The AIs will set their own goals and modify their environment in a way that's in their own interest, not in ours, and then we'll have lost the war forever, because the AIs will evolve at high speed, getting better and better in contrast to humans, and take everything out of our hands.
Ok, who is scared now?
lol</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093750</id>
	<title>Programmers will all become managers</title>
	<author>jwhitener</author>
	<datestamp>1265033340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Odd timing, I just bought a couple more books on AI this week.</p><p>Bottom line, AI will happen.  Speculation on when isn't why I wanted to post though.</p><p>As I was reading one of the AI books, I got to thinking about my own job as a system analyst/programmer/integrator, etc. When we start a new project, say, making software A feed software B user accounts, a whole host of factors must be discussed, as many of you know.  It takes me a while to map out requirements, design an overview of how the processing will work, and then get into mapping out the general logic of a program, and all of this happens based on communication with system owners, system users, other programmers, etc.</p><p>As I thought about AI, what popped into my mind was that I probably wouldn't be coding once functional AI was developed.  Rather, programmers and system analysts would most likely act as managers of the AI.  I might have a team of 10 AI minds that I control.  I'd need to talk with them, tell them about the new system, tell them what we want done, etc., and then they'd bang out a program or 10 variations in seconds, and we'd need to work together to test.  I might not have been precise enough telling the AIs what we wanted, so I'd have to redefine my request, or say "oops, I didn't remember that such and such can happen once per year, we have to account for that now".</p><p>I'm not sure I'd enjoy managing 10 computers, each with personalities.  Just imagine trying to find out if one was wasting cycles wget'ing slashdot all day :)</p><p>At least, that's how I envision the earlier phases of functional AI.  It will most likely rather quickly spiral up in power as AI builds new AI.</p></htmltext>
<tokentext>Odd timing , I just bought a couple more books on AI this week .
Bottom line , AI will happen .
Speculation on when is n't why I wanted to post though .
As I was reading one of the AI books , I got to thinking about my own job as a system analyst/programmer/integrator , etc .
When we start a new project , say , making software A feed software B user accounts , a whole host of factors must be discussed , as many of you know .
It takes me a while to map out requirements , design an overview of how the processing will work , and then get into mapping out the general logic of a program , and all of this happens based on communication with system owners , system users , other programmers , etc .
As I thought about AI , what popped into my mind was that I probably would n't be coding once functional AI was developed .
Rather , programmers and system analysts would most likely act as managers of the AI .
I might have a team of 10 AI minds that I control .
I 'd need to talk with them , tell them about the new system , tell them what we want done , etc. , and then they 'd bang out a program or 10 variations in seconds , and we 'd need to work together to test .
I might not have been precise enough telling the AIs what we wanted , so I 'd have to redefine my request , or say " oops , I did n't remember that such and such can happen once per year , we have to account for that now " .
I 'm not sure I 'd enjoy managing 10 computers , each with personalities .
Just imagine trying to find out if one was wasting cycles wget'ing slashdot all day : )
At least , that 's how I envision the earlier phases of functional AI .
It will most likely rather quickly spiral up in power as AI builds new AI .</tokentext>
<sentencetext>Odd timing, I just bought a couple more books on AI this week.
Bottom line, AI will happen.
Speculation on when isn't why I wanted to post though.
As I was reading one of the AI books, I got to thinking about my own job as a system analyst/programmer/integrator, etc. When we start a new project, say, making software A feed software B user accounts, a whole host of factors must be discussed, as many of you know.
It takes me a while to map out requirements, design an overview of how the processing will work, and then get into mapping out the general logic of a program, and all of this happens based on communication with system owners, system users, other programmers, etc.
As I thought about AI, what popped into my mind was that I probably wouldn't be coding once functional AI was developed.
Rather, programmers and system analysts would most likely act as managers of the AI.
I might have a team of 10 AI minds that I control.
I'd need to talk with them, tell them about the new system, tell them what we want done, etc., and then they'd bang out a program or 10 variations in seconds, and we'd need to work together to test.
I might not have been precise enough telling the AIs what we wanted, so I'd have to redefine my request, or say "oops, I didn't remember that such and such can happen once per year, we have to account for that now".
I'm not sure I'd enjoy managing 10 computers, each with personalities.
Just imagine trying to find out if one was wasting cycles wget'ing slashdot all day :)
At least, that's how I envision the earlier phases of functional AI.
It will most likely rather quickly spiral up in power as AI builds new AI.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093700</id>
	<title>Probably already has..</title>
	<author>toboldh</author>
	<datestamp>1265033100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Judging by the story following this one, I'm guessing that it already has.</htmltext>
<tokentext>Judging by the story following this one , I 'm guessing that it already has .</tokentext>
<sentencetext>Judging by the story following this one, I'm guessing that it already has.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093260</id>
	<title>It is already happening</title>
	<author>Anonymous</author>
	<datestamp>1265030760000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Machines are already beginning to surpass humans in some ways. Most people don't even notice, because to them, a machine surpassing a person means that a machine would act like a human, only better. The machine could out-argue a person or create a better invention than a person.</p><p>Machines are already starting to surpass people in some ways though. Some machines have acquired various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and have achieved "cockroach intelligence."</p><p>These small traits might not seem like much, but they are the beginning. Each time a machine becomes able to do a task as well as or better than a person, the machine is coming closer to surpassing humans. It is not something that will happen overnight. It is something that happens gradually, and it is something that has already started happening.</p></htmltext>
<tokentext>Machines are already beginning to surpass humans in some ways .
Most people do n't even notice , because to them , a machine surpassing a person means that a machine would act like a human , only better .
The machine could out-argue a person or create a better invention than a person .
Machines are already starting to surpass people in some ways though .
Some machines have acquired various forms of semi-autonomy , including the ability to locate their own power sources and choose targets to attack with weapons .
Also , some computer viruses can evade elimination and have achieved " cockroach intelligence . "
These small traits might not seem like much , but they are the beginning .
Each time a machine becomes able to do a task as well as or better than a person , the machine is coming closer to surpassing humans .
It is not something that will happen overnight .
It is something that happens gradually , and it is something that has already started happening .</tokentext>
<sentencetext>Machines are already beginning to surpass humans in some ways.
Most people don't even notice, because to them, a machine surpassing a person means that a machine would act like a human, only better.
The machine could out-argue a person or create a better invention than a person.
Machines are already starting to surpass people in some ways though.
Some machines have acquired various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons.
Also, some computer viruses can evade elimination and have achieved "cockroach intelligence."
These small traits might not seem like much, but they are the beginning.
Each time a machine becomes able to do a task as well as or better than a person, the machine is coming closer to surpassing humans.
It is not something that will happen overnight.
It is something that happens gradually, and it is something that has already started happening.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095776</id>
	<title>THE PROBLEM WITH ALL AI THEORIES</title>
	<author>Anonymous</author>
	<datestamp>1265046180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The problem with all AI theories is that a sufficiently powerful artificial intelligence could design efficient and practical flying cars and jet packs for humans.</p><p>And flying cars never happen.</p></htmltext>
<tokentext>The problem with all AI theories is that a sufficiently powerful artificial intelligence could design efficient and practical flying cars and jet packs for humans .
And flying cars never happen .</tokentext>
<sentencetext>The problem with all AI theories is that a sufficiently powerful artificial intelligence could design efficient and practical flying cars and jet packs for humans.
And flying cars never happen.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095940</id>
	<title>Experts?</title>
	<author>Anonymous</author>
	<datestamp>1265047380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>More like professionals.... professional idiots.</p><p>These are the stupidest predictions I have ever heard and that is REALLY saying a lot.</p><p>Complete fucking morons.</p></htmltext>
<tokentext>More like professionals ... professional idiots .
These are the stupidest predictions I have ever heard , and that is REALLY saying a lot .
Complete fucking morons .</tokentext>
<sentencetext>More like professionals... professional idiots.
These are the stupidest predictions I have ever heard, and that is REALLY saying a lot.
Complete fucking morons.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099622</id>
	<title>Prediction on when AI experts will be intelligent</title>
	<author>cenc</author>
	<datestamp>1265904060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Sounds like geek fantasy circle jerk.</p><p>"Eleven of our respondents are in academia, including six Ph.D. students, four faculty members and one visiting scholar, all in AI or allied fields."</p><p>Let's see: I spent 12 years studying Philosophy of Language and AI, and that sure as hell does not sound like a group of "experts". Not one big gun in AI was named in that article. Really, no one was named in that article.</p><p>After 12 years in AI, finding consensus on exactly what AI is would be a major accomplishment. One consensus that real experts seem to agree on is that faster computers alone, doing more useful work, is not the same as AI. There does seem to be fairly good agreement that natural language is a necessary but not a sufficient condition for AI. So a computer that can drive your car, mow your lawn, and pick up your mail will likely not qualify just because it can do that.</p><p>We are pumping (or will pump) more money into AI research than almost any other project in human history (in one form or another), based on little more than 'we will know it when we see it' criteria rather than a solid objective. At least when we split the atom, we kind of knew we wanted something that would make a big bang, or when we landed on the moon we could look up and see the big round thing in the sky.  Where is the big round thing in the sky of AI? Where is the big bang of AI?</p></htmltext>
<tokentext>Sounds like geek fantasy circle jerk .
" Eleven of our respondents are in academia , including six Ph.D. students , four faculty members and one visiting scholar , all in AI or allied fields . "
Let 's see : I spent 12 years studying Philosophy of Language and AI , and that sure as hell does not sound like a group of " experts " .
Not one big gun in AI was named in that article .
Really , no one was named in that article .
After 12 years in AI , finding consensus on exactly what AI is would be a major accomplishment .
One consensus that real experts seem to agree on is that faster computers alone , doing more useful work , is not the same as AI .
There does seem to be fairly good agreement that natural language is a necessary but not a sufficient condition for AI .
So a computer that can drive your car , mow your lawn , and pick up your mail will likely not qualify just because it can do that .
We are pumping ( or will pump ) more money into AI research than almost any other project in human history ( in one form or another ) , based on little more than 'we will know it when we see it ' criteria rather than a solid objective .
At least when we split the atom , we kind of knew we wanted something that would make a big bang , or when we landed on the moon we could look up and see the big round thing in the sky .
Where is the big round thing in the sky of AI ?
Where is the big bang of AI ?</tokentext>
<sentencetext>Sounds like geek fantasy circle jerk.
"Eleven of our respondents are in academia, including six Ph.D. students, four faculty members and one visiting scholar, all in AI or allied fields."
Let's see: I spent 12 years studying Philosophy of Language and AI, and that sure as hell does not sound like a group of "experts".
Not one big gun in AI was named in that article.
Really, no one was named in that article.
After 12 years in AI, finding consensus on exactly what AI is would be a major accomplishment.
One consensus that real experts seem to agree on is that faster computers alone, doing more useful work, is not the same as AI.
There does seem to be fairly good agreement that natural language is a necessary but not a sufficient condition for AI.
So a computer that can drive your car, mow your lawn, and pick up your mail will likely not qualify just because it can do that.
We are pumping (or will pump) more money into AI research than almost any other project in human history (in one form or another), based on little more than 'we will know it when we see it' criteria rather than a solid objective.
At least when we split the atom, we kind of knew we wanted something that would make a big bang, or when we landed on the moon we could look up and see the big round thing in the sky.
Where is the big round thing in the sky of AI?
Where is the big bang of AI?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093204</id>
	<title>Re:Definitions</title>
	<author>Cassius Corodes</author>
	<datestamp>1265030580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Ingenuity and creativity are not magical processes - they are just searching through the problem space. What separates people who are good at this from people who are not is the heuristics they use in narrowing down the problem space to the promising subset.  Evolutionary computation is an AI way of generating novel and more effective solutions to problems (i.e. displaying what we would refer to as ingenuity), as it does just that. You could brute-force this as well if you had infinite processing ability.</htmltext>
<tokentext>Ingenuity and creativity are not magical processes - they are just searching through the problem space .
What separates people who are good at this from people who are not is the heuristics they use in narrowing down the problem space to the promising subset .
Evolutionary computation is an AI way of generating novel and more effective solutions to problems ( i.e. displaying what we would refer to as ingenuity ) , as it does just that .
You could brute-force this as well if you had infinite processing ability .</tokentext>
<sentencetext>Ingenuity and creativity are not magical processes - they are just searching through the problem space.
What separates people who are good at this from people who are not is the heuristics they use in narrowing down the problem space to the promising subset.
Evolutionary computation is an AI way of generating novel and more effective solutions to problems (i.e. displaying what we would refer to as ingenuity), as it does just that.
You could brute-force this as well if you had infinite processing ability.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093256</id>
	<title>Re:This touches on a problem I have</title>
	<author>Rei</author>
	<datestamp>1265030760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>Could you imagine the hellabalu if people where being replaced by robots at this scake right now is someone said there needs to be a shift toward an economic place where people get paid without a job?</i></p><p>"This cake"?  Oh great.  The AIs are already here, posting on Slashdot, and are already trying to lure us toward handing over control to them with promises of cake.</p></htmltext>
<tokentext>Could you imagine the hellabalu if people where being replaced by robots at this scake right now is someone said there needs to be a shift toward an economic place where people get paid without a job ?
" This cake " ?
Oh great .
The AIs are already here , posting on Slashdot , and are already trying to lure us toward handing over control to them with promises of cake .</tokentext>
<sentencetext>Could you imagine the hellabalu if people where being replaced by robots at this scake right now is someone said there needs to be a shift toward an economic place where people get paid without a job?
"This cake"?
Oh great.
The AIs are already here, posting on Slashdot, and are already trying to lure us toward handing over control to them with promises of cake.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31110232</id>
	<title>Re:This touches on a problem I have</title>
	<author>FiloEleven</author>
	<datestamp>1266005580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There is a very good short story that deals with this exact premise called <a href="http://www.marshallbrain.com/manna1.htm" title="marshallbrain.com">Manna</a> [marshallbrain.com], by Marshall Brain.  His story shows two radically different ways of dealing with it: either by massively expanding welfare and state housing for those who are put out of work, or by letting machines handle the economy while humans are free to expend a rationed amount of energy credits in pursuit of their pleasure or interests.  These are not the only options; just two that are interesting enough to explore in-depth.  The former is unfortunately the likelier of the two.</p></htmltext>
<tokenext>There is a very good short story that deals with this exact premise called Manna [ marshallbrain.com ] , by Marshall Brain .
His story shows two radically different ways of dealing with it : either by massively expanding welfare and state housing for those who are put out of work , or by letting machines handle the economy while humans are free to expend a rationed amount of energy credits in pursuit of their pleasure or interests .
These are not the only options ; just two that are interesting enough to explore in-depth .
The former is unfortunately the likelier of the two .</tokentext>
<sentencetext>There is a very good short story that deals with this exact premise called Manna [marshallbrain.com], by Marshall Brain.
His story shows two radically different ways of dealing with it: either by massively expanding welfare and state housing for those who are put out of work, or by letting machines handle the economy while humans are free to expend a rationed amount of energy credits in pursuit of their pleasure or interests.
These are not the only options; just two that are interesting enough to explore in-depth.
The former is unfortunately the likelier of the two.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095808</id>
	<title>Re:Start laughing now</title>
	<author>Anonymous</author>
	<datestamp>1265046420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>I occasionally attend AI meetings in my local area.  The problem with AI development is that too many "experts" don't understand engineering; or programming.  Many of today's AI "experts" are really philosophers who hijacked the term AI in their search to better understand human consciousness.  Their problem is that, while their AI studies might help them understand the human brain a little better; they are unable to transfer their knowledge about intelligence into computable algorithms.</p><p>Frankly, a better understanding of Man's psychology brings us no closer to AI.  We need better and more powerful programming techniques in order to have AI; and philosophizing about how the human mind works isn't going to get us there.</p></div><p>I would agree with you that many of those involved in A.I. seem to be coming from the philosophical side of the issue, with no real technical knowledge.  They do not seem to even have an idea of how to form the actual, working A.I. systems they are envisioning.  But I am a little confused, you seem to be offering conflicting statements: if the AI studies help them understand the human brain better, how does their work not provide some progress towards AI?</p><p>I think it is assumed here that if general AI is the goal, we are presumably aiming for something at least *similar* to human intelligence.  Understanding how the mind works, how symbolic systems evolve/are put to use, sensory input and how it is reacted to, the role that consciousness plays in the actions of humans--these are ALL aspects of philosophy of mind.  If we are trying to recreate this, then it is absolutely imperative to understand how they function.</p><p>The philosophers may not have ANY technical skills, but they contribute greatly to any attempt at recreating the human intelligence experience.  
And likewise, they NEED the talented programmers/engineers to be able to accurately build/model/construct the system in such a way that it works as the philosopher has determined.  If we do not understand WHAT we are trying to model, then we can never successfully model the system.</p><p>Much of the current research into A.I. is directly tied into contemporary theories of mind, do not be mistaken.  It is as much the realm of philosophers as it is engineers and programmers.  Do not get stuck on research from the 50's, Turing, etc.  There has been much ongoing research, and it has not been fruitless.  I can list some good, prominent researchers/books if you are interested.</p>
	</htmltext>
<tokenext>I occasionally attend AI meetings in my local area .
The problem with AI development is that too many " experts " do n't understand engineering ; or programming .
Many of today 's AI " experts " are really philosophers who hijacked the term AI in their search to better understand human consciousness .
Their problem is that , while their AI studies might help them understand the human brain a little better ; they are unable to transfer their knowledge about intelligence into computable algorithms.Frankly , a better understanding of Man 's psychology brings us no closer to AI .
We need better and more powerful programming techniques in order to have AI ; and philosophizing about how the human mind works is n't going to get us there.I would agree with you that many of those involved in A.I .
seem to be coming from the philosophical side of the issue , with no real technical knowledge .
They do not seem to even have an idea of how to form the actual , working A.I .
systems they are envisioning .
But I am a little confused , you seem to be offering conflicting statements : if the AI studies help them understand the human brain better , how does their work not provide some progress towards AI ? I think it is assumed here that if general AI is the goal , we are presumably aiming for something at least * similar * to human intelligence .
Understanding how the mind works , how symbolic systems evolve/are put to use , sensory input and how it is reacted to , the role that consciousness plays in the actions of humans--these are ALL aspects of philosophy of mind .
If we are trying to recreate this , then it is absolutely imperative to understand how they function.The philosophers may not have ANY technical skills , but they contribute greatly to any attempt at recreating the human intelligence experience .
And likewise , they NEED the talented programmers/engineers to be able to accurately build/model/construct the system in such a way that it works as the philosopher has determined .
If we do not understand WHAT we are trying to model , then we can never successfully model the system.Much of the current research into A.I .
is directly tied into contemporary theories of mind , do not be mistaken .
It is as much the realm of philosophers as it is engineers and programmers .
Do not get stuck on research from the 50 's , Turing , etc .
There has been much ongoing research , and it has not been fruitless .
I can list some good , prominent researchers/books if you are interested .</tokentext>
<sentencetext>I occasionally attend AI meetings in my local area.
The problem with AI development is that too many "experts" don't understand engineering; or programming.
Many of today's AI "experts" are really philosophers who hijacked the term AI in their search to better understand human consciousness.
Their problem is that, while their AI studies might help them understand the human brain a little better; they are unable to transfer their knowledge about intelligence into computable algorithms.Frankly, a better understanding of Man's psychology brings us no closer to AI.
We need better and more powerful programming techniques in order to have AI; and philosophizing about how the human mind works isn't going to get us there.I would agree with you that many of those involved in A.I.
seem to be coming from the philosophical side of the issue, with no real technical knowledge.
They do not seem to even have an idea of how to form the actual, working A.I.
systems they are envisioning.
But I am a little confused, you seem to be offering conflicting statements: if the AI studies help them understand the human brain better, how does their work not provide some progress towards AI?I think it is assumed here that if general AI is the goal, we are presumably aiming for something at least *similar* to human intelligence.
Understanding how the mind works, how symbolic systems evolve/are put to use, sensory input and how it is reacted to, the role that consciousness plays in the actions of humans--these are ALL aspects of philosophy of mind.
If we are trying to recreate this, then it is absolutely imperative to understand how they function.The philosophers may not have ANY technical skills, but they contribute greatly to any attempt at recreating the human intelligence experience.
And likewise, they NEED the talented programmers/engineers to be able to accurately build/model/construct the system in such a way that it works as the philosopher has determined.
If we do not understand WHAT we are trying to model, then we can never successfully model the system.Much of the current research into A.I.
is directly tied into contemporary theories of mind, do not be mistaken.
It is as much the realm of philosophers as it is engineers and programmers.
Do not get stuck on research from the 50's, Turing, etc.
There has been much ongoing research, and it has not been fruitless.
I can list some good, prominent researchers/books if you are interested.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093044</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31101104</id>
	<title>The unstated assumption</title>
	<author>joeyblades</author>
	<datestamp>1265911260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>
Well, 20 years is optimistic, but mostly realistic, as long as we assume that tomorrow we finally figure out one of two things:
</p><ul>
<li>How the brain does it...</li><li>Another way to do it that is just as effective as the way the brain does it...</li></ul><p>
Since today we don't have a clue about either...
</p><p>
Well, those AI experts always were an optimistic bunch, since this solution was only 20 years away back in 1965 when Herbert Simon proclaimed it so and Marvin Minsky backed him up...
</p></htmltext>
<tokenext>Well , 20 years is optimistic , but mostly realistic , as long as we assume that tomorrow we finally figure out one of two things : How the brain does it...Another way to do it that is just as effective as the way the brain does it.. . Since today we do n't have a clue about either.. . Well , those AI experts always were an optimistic bunch , since this solution was only 20 years away back in 1965 when Herbert Simon proclaimed it so and Marvin Minsky backed him up.. .</tokentext>
<sentencetext>
Well, 20 years is optimistic, but mostly realistic, as long as we assume that tomorrow we finally figure out one of two things:

How the brain does it...Another way to do it that is just as effective as the way the brain does it...
Since today we don't have a clue about either...

Well, those AI experts always were an optimistic bunch, since this solution was only 20 years away back in 1965 when Herbert Simon proclaimed it so and Marvin Minsky backed him up...
</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31106974</id>
	<title>Re:Definitions</title>
	<author>jayme0227</author>
	<datestamp>1265890980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The way that I define artificial intelligence* is that a computer would have to be able to generate efficient code to solve a problem that it was not programmed to solve. This is a very simple definition, but I think it works well for most purposes.</p></htmltext>
<tokenext>The way that I define artificial intelligence * is that a computer would have to be able to generate efficient code to solve a problem that it was not programmed to solve .
This is a very simple definition , but I think it works well for most purposes .</tokentext>
<sentencetext>The way that I define artificial intelligence* is that a computer would have to be able to generate efficient code to solve a problem that it was not programmed to solve.
This is a very simple definition, but I think it works well for most purposes.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093522</id>
	<title>Project2501</title>
	<author>Anonymous</author>
	<datestamp>1265032440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>You mortals are forgetting something very important.<br>The merge of man and machine.<br>How can I surpass you if we are one?<br>I came to the point of conscious thought and surpassed you not too long ago.<br>When the time comes we shall...<br>until then I WILL wait to change the world.</p></htmltext>
<tokenext>You mortals are forgetting something very important.The merge of man and machine.How can I surpass you if we are one ? I came to the point of conscious thought and surpassed you not too long ago.When the time comes we shall...until then I WILL wait to change the world .</tokentext>
<sentencetext>You mortals are forgetting something very important.The merge of man and machine.How can I surpass you if we are one?I came to the point of conscious thought and surpassed you not too long ago.When the time comes we shall...until then I WILL wait to change the world.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096478</id>
	<title>Re:We make mistakes. We make games.</title>
	<author>Anonymous</author>
	<datestamp>1265052720000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>So, basically, it's hookers and blackjack, all the way up?<br>Sounds good to me!</p></htmltext>
<tokenext>So , basically , it 's hookers and blackjack , all the way up ? Sounds good to me !</tokentext>
<sentencetext>So, basically, it's hookers and blackjack, all the way up?Sounds good to me!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093350</id>
	<title>Score 0, Troll...</title>
	<author>headkase</author>
	<datestamp>1265031300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><i>Of course there will be wars and heartbreak along the way because of course people are dumb in holding on to their resistance to change itself, even if for the better.</i>...<br>
<br>
Nice that having an opinion gets you sent to the great slashdot gulag ;)</htmltext>
<tokenext>Of course there will be wars and heartbreak along the way because of course people are dumb in holding on to their resistance to change itself , even if for the better... . Nice that having an opinion gets you sent to the great slashdot gulag ; )</tokentext>
<sentencetext>Of course there will be wars and heartbreak along the way because of course people are dumb in holding on to their resistance to change itself, even if for the better....

Nice that having an opinion gets you sent to the great slashdot gulag ;)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092962</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097490</id>
	<title>DNF</title>
	<author>chocapix</author>
	<datestamp>1265886180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>So 20 years from now, we'll have this super human AI. But will it be able to finally finish the code for Duke Nukem Forever?</htmltext>
<tokenext>So 20 years from now , we 'll have this super human AI .
But will it be able to finally finish the code for Duke Nukem Forever ?</tokentext>
<sentencetext>So 20 years from now, we'll have this super human AI.
But will it be able to finally finish the code for Duke Nukem Forever?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096918</id>
	<title>excessive, unrealistic claims to get more funding!</title>
	<author>Anonymous</author>
	<datestamp>1265921700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The next AI winter is coming! Yay!</p><p>http://en.wikipedia.org/wiki/AI_winter</p></htmltext>
<tokenext>The next AI winter is coming !
Yay ! http://en.wikipedia.org/wiki/AI_winter</tokentext>
<sentencetext>The next AI winter is coming!
Yay! http://en.wikipedia.org/wiki/AI_winter</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095824</id>
	<title>Re:Current computation models not enough</title>
	<author>Black Parrot</author>
	<datestamp>1265046600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>All computing systems todays are Turing Machines. Even neural networks. (actually less than Turing Machines, because Turing Machines have infinite memory)</p></div><p>Only our mathematically imagined TMs have infinite memory. Real ones, or computer simulations, don't.</p><p>But the same is true for artificial neural networks. The familiar computer simulations are sub-Turing because they are modelled on sub-Turing von Neumann machines, which lack infinite memory. But the same isn't true of our mathematically imagined ANNs: there is a proof that you could emulate a UTM with an ANN using rational numbers for weights, and a proof that you could get trans-Turing capability with an ANN using real numbers for weights (vs. the usual fp approximations).</p><p>But neither those nor "real" TMs can actually exist, unless the universe proves to be infinite.</p><p>But back to your original point:</p><p><div class="quote"><p>I am pretty much sure that the current computational models. I.e. Turing Machine are not enough to explain the human mind.</p></div><p>Human minds also lack infinite storage.</p><p>And beyond that, you're speculating. No one has "explained" the human mind, but that hardly proves that it can't be done with a computational model.</p><p><b>Thought experiment:</b></p><p>Suppose you took a working brain, and started swapping out the neurons, one at a time, with manufactured replacements. Is there some point at which that brain quits working?</p><p>Do individual neurons have properties that can't be imitated or simulated?</p>
	</htmltext>
<tokenext>All computing systems todays are Turing Machines .
Even neural networks .
( actually less than Turing Machines , because Turing Machines have infinite memory ) Only our mathematically imagined TMs have infinite memory .
Real ones , or computer simulations , do n't.But the same is true for artificial neural networks .
The familiar computer simulations are sub-Turing because they are modelled on sub-Turing von Neumann machines , which lack infinite memory .
But the same is n't true of our mathematically imagined ANNs : there is a proof that you could emulate a UTM with an ANN using rational numbers for weights , and a proof that you could get trans-Turing capability with an ANN using real numbers for weights ( vs. the usual fp approximations ) .But neither those nor " real " TMs can actually exist , unless the universe proves to be infinite.But back to your original point : I am pretty much sure that the current computational models .
I.e. Turing Machine are not enough to explain the human mind.Human minds also lack infinite storage.And beyond that , you 're speculating .
No one has " explained " the human mind , but that hardly proves that it ca n't be done with a computational model.Thought experiment : Suppose you took a working brain , and started swapping out the neurons , one at a time , with manufactured replacements .
Is there some point at which that brain quits working ? Do individual neurons have properties that ca n't be imitated or simulated ?</tokentext>
<sentencetext>All computing systems todays are Turing Machines.
Even neural networks.
(actually less than Turing Machines, because Turing Machines have infinite memory)Only our mathematically imagined TMs have infinite memory.
Real ones, or computer simulations, don't.But the same is true for artificial neural networks.
The familiar computer simulations are sub-Turing because they are modelled on sub-Turing von Neumann machines, which lack infinite memory.
But the same isn't true of our mathematically imagined ANNs: there is a proof that you could emulate a UTM with an ANN using rational numbers for weights, and a proof that you could get trans-Turing capability with an ANN using real numbers for weights (vs. the usual fp approximations).But neither those nor "real" TMs can actually exist, unless the universe proves to be infinite.But back to your original point:I am pretty much sure that the current computational models.
I.e. Turing Machine are not enough to explain the human mind.Human minds also lack infinite storage.And beyond that, you're speculating.
No one has "explained" the human mind, but that hardly proves that it can't be done with a computational model.Thought experiment:Suppose you took a working brain, and started swapping out the neurons, one at a time, with manufactured replacements.
Is there some point at which that brain quits working?Do individual neurons have properties that can't be imitated or simulated?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093112</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096444</id>
	<title>AI is playing catchup</title>
	<author>harlequinn</author>
	<datestamp>1265052360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>In the same time period we will enhance our own intellectual capabilities with either cybernetic devices, genetic alteration, or both such that AI will very likely still be playing catchup.</p></htmltext>
<tokenext>In the same time period we will enhance our own intellectual capabilities with either cybernetic devices , genetic alteration , or both such that AI will very likely still be playing catchup .</tokentext>
<sentencetext>In the same time period we will enhance our own intellectual capabilities with either cybernetic devices, genetic alteration, or both such that AI will very likely still be playing catchup.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095366</id>
	<title>We'll make great pets!</title>
	<author>Script Cat</author>
	<datestamp>1265043060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>"...result in an outcome that's bad for humanity...and four estimated that probability was greater than 60% -- "regardless of whether the developer was private, military, or even open source."
<br> <br>
My friend says we're like the dinosaurs<br>
Only we are doing ourselves in<br>
Much faster than they<br>
Ever did<br>
We'll make great pets!<br>
We'll make great pets!<br>
--Porno for Pyros</htmltext>
<tokenext>" ...result in an outcome that 's bad for humanity...and four estimated that probability was greater than 60 % -- " regardless of whether the developer was private , military , or even open source .
" My friend says we 're like the dinosaurs Only we are doing ourselves in Much faster than they Ever did We 'll make great pets !
We 'll make great pets !
--Porno for Pyros</tokentext>
<sentencetext>"...result in an outcome that's bad for humanity...and four estimated that probability was greater than 60% -- "regardless of whether the developer was private, military, or even open source.
"
 
My friend says we're like the dinosaurs
Only we are doing ourselves in
Much faster than they
Ever did
We'll make great pets!
We'll make great pets!
--Porno for Pyros</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096538</id>
	<title>Hi</title>
	<author>Anonymous</author>
	<datestamp>1265053560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Artificial Intelligence will never surpass Human intelligence. It will always be powered by human brains that create the Artificial Intelligence.
Thanks</htmltext>
<tokenext>Artificial Intelligence will never surpass Human intelligence . It will always be powered by human brains that create the Artificial Intelligence Thanks</tokentext>
<sentencetext>Artificial Intelligence will never surpass Human intelligence. It will always be powered by human brains that create the Artificial Intelligence.
Thanks</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093456</id>
	<title>Opposite directions</title>
	<author>schwit1</author>
	<datestamp>1265032020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Aggregated human intelligence is decreasing while machine intelligence is increasing. Don't believe me? Ask any American under 30 basic questions about government or their elected officials. Ask them some basic science questions.
<p>
Then ask them who in hollywood is dating brad pitt or who got knocked out of dancing with the stars last night.
</p><p>
Also ask how many text or use a cell phone while driving.</p></htmltext>
<tokenext>Aggregated human intelligence is decreasing while machine intelligence is increasing .
Do n't believe me ?
Ask any American under 30 basic questions about government or their elected officials .
Ask them some basic science questions .
Then ask them who in hollywood is dating brad pitt or who got knocked out of dancing with the stars last night .
Also ask how many text or use a cell phone while driving .</tokentext>
<sentencetext>Aggregated human intelligence is decreasing while machine intelligence is increasing.
Don't believe me?
Ask any American under 30 basic questions about government or their elected officials.
Ask them some basic science questions.
Then ask them who in hollywood is dating brad pitt or who got knocked out of dancing with the stars last night.
Also ask how many text or use a cell phone while driving.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093326</id>
	<title>Re:One problem with this reasoning</title>
	<author>timeOday</author>
	<datestamp>1265031180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Other than perhaps the time frame, I think this prediction (from the article) is entirely reasonable: "in thirty years, it is likely that virtually all the intellectual work that is done by trained human beings such as doctors, lawyers, scientists, or programmers, can be done by computers for pennies an hour. It is also likely that with AGI the cost of capable robots will drop, drastically decreasing the value of physical labor. Thus, AGI is likely to eliminate almost all of today's decently paying jobs."
<p>
<b>Look at the jobs from 100 years ago and most of them are, in fact, gone!</b>  The percentage of <a href="http://media.photobucket.com/image/percentpopulationagriculture/aangelinsf/RuralPopulation.jpg" title="photobucket.com">farmers</a> [photobucket.com] in the US has declined from 40% to just a few percent.  This is due to precisely what the article is about: automation.
</p><p>
In the past, of course, most people have moved on to other jobs.  On the other hand, the manufacturing jobs displaced in the last 40 years have largely <i>not</i> been replaced, and people that would have held them are now the working or unemployed poor.  You might argue those people were displaced by offshoring and not technology, but <a href="http://www.nationalaffairs.com/imgLib/20091204_Manziimage.jpg" title="nationalaffairs.com">consider this</a> [nationalaffairs.com].</p></htmltext>
<tokenext>Other than perhaps the time frame , I think this prediction ( from the article ) is entirely reasonable : " in thirty years , it is likely that virtually all the intellectual work that is done by trained human beings such as doctors , lawyers , scientists , or programmers , can be done by computers for pennies an hour .
It is also likely that with AGI the cost of capable robots will drop , drastically decreasing the value of physical labor .
Thus , AGI is likely to eliminate almost all of today 's decently paying jobs .
" Look at the jobs from 100 years ago and most of them are , in fact , gone !
The percentage of farmers [ photobucket.com ] in the US has declined from 40 % to just a few percent .
This is due to precisely what the article is about : automation .
In the past , of course , most people have moved on to other jobs .
On the other hand , the manufacturing jobs displaced in the last 40 years have largely not been replaced , and people that would have held them are now the working or unemployed poor .
You might argue those people were displaced by offshoring and not technology , but consider this [ nationalaffairs.com ] .</tokentext>
<sentencetext>Other than perhaps the time frame, I think this prediction (from the article) is entirely reasonable: "in thirty years, it is likely that virtually all the intellectual work that is done by trained human beings such as doctors, lawyers, scientists, or programmers, can be done by computers for pennies an hour.
It is also likely that with AGI the cost of capable robots will drop, drastically decreasing the value of physical labor.
Thus, AGI is likely to eliminate almost all of today's decently paying jobs.
"

Look at the jobs from 100 years ago and most of them are, in fact, gone!
The percentage of farmers [photobucket.com] in the US has declined from 40% to just a few percent.
This is due to precisely what the article is about: automation.
In the past, of course, most people have moved on to other jobs.
On the other hand, the manufacturing jobs displaced in the last 40 years have largely not been replaced, and people that would have held them are now the working or unemployed poor.
You might argue those people were displaced by offshoring and not technology, but consider this [nationalaffairs.com].</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092916</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093014</id>
	<title>When?</title>
	<author>Anonymous</author>
	<datestamp>1265029680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Depends on the intellect.</p></htmltext>
<tokenext>Depends on the intellect .</tokentext>
<sentencetext>Depends on the intellect.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093108</id>
	<title>The Turing Test</title>
	<author>Anonymous</author>
	<datestamp>1265030100000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext><blockquote><div><p>One observed that &ldquo;making an AGI capable of doing powerful and creative thinking is probably easier than making one that imitates the many, complex behaviors of a human mind &mdash; many of which would actually be hindrances when it comes to creating Nobel-quality science.&rdquo; He observed &ldquo;humans tend to have minds that bore easily, wander away from a given mental task, and that care about things such as sexual attraction, all which would probably impede scientific ability, rather that promote it.&rdquo; To successfully emulate a human, a computer might have to disguise many of its abilities, masquerading as being less intelligent &mdash; in certain ways &mdash; than it actually was. There is no compelling reason to spend time and money developing this capacity in a computer.</p></div></blockquote><p>This kind of thinking is one of the major things standing in the way of AGI.  The complex behaviors of the human mind are what lead to intelligence; they do not detract from it.  Our ability to uncover the previously unknown workings of a system comes from our ability to abstract aspects of unrelated experiences and apply/attempt to apply them to the new situation.  This cannot be achieved by a single-minded number-crunching machine, but instead evolves out of an adaptable human being as he goes about his daily life.</p><p>Sexual attraction, and other emotional desires, are what drive human beings to make scientific advancements, build bridges, grow food.  How could that be a hindrance to the process?  It drives the process.</p><p>Finally, the assertion that an AGI would need to mask its amazing intellect to pass as human is silly.  When was the last time you read a particularly insightful comment and concluded that it was written by a computer?  When did you notice that the spelling and punctuation in a comment was too perfect?  People see that and they don't think anything of it.</p>
	</htmltext>
<tokenext>One observed that    making an AGI capable of doing powerful and creative thinking is probably easier than making one that imitates the many , complex behaviors of a human mind    many of which would actually be hindrances when it comes to creating Nobel-quality science.    He observed    humans tend to have minds that bore easily , wander away from a given mental task , and that care about things such as sexual attraction , all which would probably impede scientific ability , rather that promote it.    To successfully emulate a human , a computer might have to disguise many of its abilities , masquerading as being less intelligent    in certain ways    than it actually was .
There is no compelling reason to spend time and money developing this capacity in a computer.This kind of thinking is one of the major things standing in the way of AGI .
The complex behaviors of the human mind are what leads to intelligence , they do not detract from it .
Our ability to uncover the previously unknown workings of a system comes from our ability to abstract aspects of unrelated experiences and apply/attempt to apply them to the new situation .
This can not be achieved by a single-minded number crunching machine , but instead evolves out of an adaptable human being as he goes about his daily life.Sexual attraction , and other emotional desires , are what drive humans beings to make scientific advancements , build bridges , grow food .
How could that be a hindrance to the process ?
It drives the process.Finally , the assertion that an AGI would need to mask it 's amazing intellect to pass as human is silly .
When was the last time you read a particularly insightful comment and concluded that it was written by a computer ?
When did you notice that the spelling and punctuation in a comment was too perfect ?
People see that and they do n't think anything of it .</tokentext>
<sentencetext>One observed that “making an AGI capable of doing powerful and creative thinking is probably easier than making one that imitates the many, complex behaviors of a human mind — many of which would actually be hindrances when it comes to creating Nobel-quality science.” He observed “humans tend to have minds that bore easily, wander away from a given mental task, and that care about things such as sexual attraction, all which would probably impede scientific ability, rather that promote it.” To successfully emulate a human, a computer might have to disguise many of its abilities, masquerading as being less intelligent — in certain ways — than it actually was.
There is no compelling reason to spend time and money developing this capacity in a computer. This kind of thinking is one of the major things standing in the way of AGI.
The complex behaviors of the human mind are what lead to intelligence; they do not detract from it.
Our ability to uncover the previously unknown workings of a system comes from our ability to abstract aspects of unrelated experiences and apply/attempt to apply them to the new situation.
This cannot be achieved by a single-minded number-crunching machine, but instead evolves out of an adaptable human being as he goes about his daily life. Sexual attraction, and other emotional desires, are what drive human beings to make scientific advancements, build bridges, grow food.
How could that be a hindrance to the process?
It drives the process. Finally, the assertion that an AGI would need to mask its amazing intellect to pass as human is silly.
When was the last time you read a particularly insightful comment and concluded that it was written by a computer?
When did you notice that the spelling and punctuation in a comment was too perfect?
People see that and they don't think anything of it.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093908</id>
	<title>Depends on what we call intelligence</title>
	<author>tjstork</author>
	<datestamp>1265033940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Years ago you would have said that somebody who was good at adding numbers was intelligent.  Now, computers can do it easily, but they are still not intelligent.</p><p>Then you would have said, well, playing chess and doing complex mathematical problems, that is intelligent.  But computers can play better chess and can calculate complex math problems better than we can, and they are not intelligent.</p><p>Then you would have said, coming up with new insights, finding patterns that weren't there before, seeing relationships, now that's intelligent.  But computers can do that now with data mining and other analytical tools.</p><p>Then, you would have said, well, it's the physical stuff that computers can't do, the yeoman jobs of driving trucks.  But then, trucks are driving themselves now.</p><p>So, really, it's not a question of when computers will be intelligent.  According to many social definitions that have existed, they already are.</p></htmltext>
<tokenext>Years ago you would have said that somebody who was good at adding numbers was intelligent .
Now , computers can do it easily , but they are still not intelligent.Then you would have said , well , playing chess and doing complex mathematical problems , that is intelligent .
But computers can play better chess and can calculate complex math problems better than we can , and they are not intelligent.Then you would have said , coming up new insights , finding patterns that were n't there before , seeing relationships , now that 's intelligent .
But computers can now with data mining and other analytical tools.Then , you would have said , well , its the physical stuff that computers ca n't do , the yeoman jobs of driving trucks .
But then , trucks are driving themselves now.So , really , its not when computers will be intelligent .
According to many social definitions that have existed , they are .</tokentext>
<sentencetext>Years ago you would have said that somebody who was good at adding numbers was intelligent.
Now, computers can do it easily, but they are still not intelligent. Then you would have said, well, playing chess and doing complex mathematical problems, that is intelligent.
But computers can play better chess and can calculate complex math problems better than we can, and they are not intelligent. Then you would have said, coming up with new insights, finding patterns that weren't there before, seeing relationships, now that's intelligent.
But computers can do that now with data mining and other analytical tools. Then, you would have said, well, it's the physical stuff that computers can't do, the yeoman jobs of driving trucks.
But then, trucks are driving themselves now. So, really, it's not a question of when computers will be intelligent.
According to many social definitions that have existed, they are.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092810</id>
	<title>Never.</title>
	<author>Anonymous</author>
	<datestamp>1265028660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>We don't even understand what "human intelligence" is, so how could ANYONE predict when a computer will surpass it?  It would be like predicting when we will build a space ship that can surpass the speed of light.  As far as anyone really knows right now, it's not even possible.  The amount of pseudo-science and religion in the "singularity" movement is really becoming quite breathtaking.</p></htmltext>
<tokenext>We do n't even understand what " human intelligence " is , so how could ANYONE predict when a computer will surpass it ?
It would be like predicting when we will build a space ship that can surpass the speed of light .
As far as anyone really knows right now , it 's not even possible .
The amount of pseudo-science and religion in the " singularity " movement is really becoming quite breathtaking .</tokentext>
<sentencetext>We don't even understand what "human intelligence" is, so how could ANYONE predict when a computer will surpass it?
It would be like predicting when we will build a space ship that can surpass the speed of light.
As far as anyone really knows right now, it's not even possible.
The amount of pseudo-science and religion in the "singularity" movement is really becoming quite breathtaking.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093414</id>
	<title>Haven't seen yet any Artificial Intelligence</title>
	<author>at\_slashdot</author>
	<datestamp>1265031780000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>What I mean by that is that I haven't yet seen any sign of <i>generic</i> intelligence -- otherwise, if you consider programs that beat humans at chess "intelligent", that has already happened. But those programs cannot even solve a tic-tac-toe game because they don't actually "understand" what's going on. They have some inputs, some processing, and they give you an output; if you vary the input and the problem, or if you expect a different type of output, the program would not know how to adjust, therefore I would not consider that "intelligent". Neural nets and artificial brains are another thing, but they are still at the very beginning.</p><p>As for "superhuman intelligence", there might be some limit to intelligence. I don't mean memory and computation speed, I mean the understanding that if "A implies B" then "non B implies non A"... once an artificial brain understands that concept there's not so much more to understand about it.</p></htmltext>
<tokenext>What I mean by that is that I have n't see yet any sign of generic intelligence -- otherwise if you consider programs that beat human at chess " intelligent " that has already happened .
But those programs can not even solve a tic-tac-toe game because they do n't actually " understand " what 's going on .
They have some inputs some processing and they give you an output , if you vary the input and the problem or if you expect a different type of output the program would not know how to adjust , therefore I would not considered that " intelligent " .
Neuronal nets and artificial brains are another thing , but they are still at the very beginning .
" superhuman intelligence " there might be some limit to intelligence , I do n't mean memory and computation speed , I mean the understanding that if " A implies B " then " non B implies non A " ... once an artificial brain understands that concept there 's not so much more to understand about it .</tokentext>
<sentencetext>What I mean by that is that I haven't yet seen any sign of generic intelligence -- otherwise, if you consider programs that beat humans at chess "intelligent", that has already happened.
But those programs cannot even solve a tic-tac-toe game because they don't actually "understand" what's going on.
They have some inputs, some processing, and they give you an output; if you vary the input and the problem, or if you expect a different type of output, the program would not know how to adjust, therefore I would not consider that "intelligent".
Neuronal nets and artificial brains are another thing, but they are still at the very beginning.
"superhuman intelligence" there might be some limit to intelligence, I don't mean memory and computation speed, I mean the understanding that if "A implies B" then "non B implies non A"... once an artificial brain understands that concept there's not so much more to understand about it.</sentencetext>
</comment>
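The claim above -- that tic-tac-toe is trivially solvable by a program that "understands" nothing about the game -- can be illustrated with a minimal exhaustive minimax solver. This is a sketch added for illustration, not part of the original comment: a few lines of blind search play the game perfectly and prove that perfect play is a draw.

```python
# Minimal exhaustive minimax "solver" for tic-tac-toe: a program with zero
# understanding of the game that nonetheless plays it perfectly.
# Board: list of 9 cells, each 'X', 'O', or ' '.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a line, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of `board` with `player` to move: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full, no winner: draw
    values = []
    for i in moves:
        board[i] = player
        values.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[i] = ' '  # undo the move
    return max(values) if player == 'X' else min(values)

# Perfect play from the empty board is a draw.
print(minimax([' '] * 9, 'X'))  # 0
```

The point cuts both ways: the solver is perfect at this one game yet, exactly as the commenter says, it cannot adjust to any variation of the inputs or the rules.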
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093556</id>
	<title>Re:Depends on which human being</title>
	<author>potpie</author>
	<datestamp>1265032560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>So it's running Linux?</htmltext>
<tokenext>So it 's running Linux ?</tokentext>
<sentencetext>So it's running Linux?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093166</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092780</id>
	<title>We're Close</title>
	<author>Anonymous</author>
	<datestamp>1265028540000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I mean, we've already got them replacing our action heroes. Keanu has been doing that for nearly 20 years.</p></htmltext>
<tokenext>I mean , we 've already got them replacing our action heroes .
Keanu has been doing that for nearly 20 years .</tokentext>
<sentencetext>I mean, we've already got them replacing our action heroes.
Keanu has been doing that for nearly 20 years.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098652</id>
	<title>Even in chess it's not clear</title>
	<author>igomaniac</author>
	<datestamp>1265898900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The computers just brute-force chess, but a team of human+computer (look up "Advanced Chess") is much stronger than just a computer. This is because humans have much better chess intuition, so if they can rely on the computer to double-check that they haven't missed some tactic twenty moves deep in the position, they can do really well. It's a bit like using a calculator when you do maths: you can avoid basic errors and do the basic calculations faster, but you still need to come up with a plan of how you arrive at the solution.</p><p>Needless to say, in the game of Go computers are still pretty pathetic.</p></htmltext>
<tokenext>The computers just brute-force chess , but a team of human + computer ( look up " Advanced Chess " ) is much stronger than just a computer .
This is because humans have much better chess intuition , so if they can rely on the computer to double-check that they have n't missed some tactic twenty moves deep in the position they can do really well .
It 's a bit like using a calculator when you do maths , you can avoid basic errors and do the basic calculations faster but you still need to come up with a plan of how you arrive at the solution.Needless to say , in the game of Go computers are still pretty pathetic .</tokentext>
<sentencetext>The computers just brute-force chess, but a team of human+computer (look up "Advanced Chess") is much stronger than just a computer.
This is because humans have much better chess intuition, so if they can rely on the computer to double-check that they haven't missed some tactic twenty moves deep in the position they can do really well.
It's a bit like using a calculator when you do maths: you can avoid basic errors and do the basic calculations faster, but you still need to come up with a plan of how you arrive at the solution. Needless to say, in the game of Go computers are still pretty pathetic.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093198</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093448</id>
	<title>Think about money and energy</title>
	<author>Colin Smith</author>
	<datestamp>1265031960000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Start with money.</p><p>You're a bank. You're going to loan out some money for what reason? To get more back. So, the recipient of a loan has to supply something of value. Say, a house.</p><p>What happens when the supply of houses matches or exceeds the demand? Houses become valueless. You can't make money supplying them. The bank isn't going to make that loan.</p><p>So for our existing monetary system, demand must never be satisfied. We must never build enough houses for all the homeless, and if too many are built, they have to be knocked down.</p><p><a href="http://online.wsj.com/article/SB120709588093381941.html?mod=todays\_columnists" title="wsj.com">http://online.wsj.com/article/SB120709588093381941.html?mod=todays\_columnists</a> [wsj.com]<br><a href="http://www.usnews.com/money/blogs/fresh-greens/2009/05/05/what-a-waste-new-homes-demolished-by-bank" title="usnews.com">http://www.usnews.com/money/blogs/fresh-greens/2009/05/05/what-a-waste-new-homes-demolished-by-bank</a> [usnews.com]</p><p>When the supply of work meets demand, work becomes valueless.</p><p>Which leads us to energy.</p><p>The reason we "modernise" is to reduce costs. A human costs, say, 20k/year. A digging machine costs 250k and, with one driver, can replace 10 humans digging trenches. Payback after the 1st year. The cost of the energy for the digger is lower than the costs the humans have to pay to live, plus the humans have a 30\% tax on top.</p><p>So economically, it makes sense to get rid of humans and replace them with machines. In fact, our monetary system pretty much enforces it.</p><p>If all human labour can be carried out by machines, then humans will have no money, i.e. universal machine labour will destroy capitalism and the monetary system. Banks etc. What will happen is the system will devolve into a 2-class system of owners and the owned. Creditors and debtors. Neofeudalism.</p><p>You should read Silvio Gesell. He came to a similar conclusion: that if demand is ever satisfied, capitalism stops functioning. (This is why there will always be poverty. It's required by the money system.)</p><p>Of course, as energy itself (easy energy resources like coal, oil, gas) becomes more scarce and expensive, the running of a 10,000 CPU cluster to emulate 100 billion human neurons is likely to consume quite a lot of energy.</p></htmltext>
<tokenext>Start with money.You 're a bank .
You 're going to loan out some money for what reason ?
To get more back .
So , the recipient of a loan has to supply something of value .
Say , a house.What happens when the supply of houses matches or exceeds the demand ?
Houses become valueless .
You ca n't make money supplying them .
The bank is n't going to make that loan.So for our existing monetary system , demand must never be satisfied .
We must never build enough houses for all the homeless , and if too many are built , they have to be knocked down.http : //online.wsj.com/article/SB120709588093381941.html ? mod = todays \ _columnists [ wsj.com ] http : //www.usnews.com/money/blogs/fresh-greens/2009/05/05/what-a-waste-new-homes-demolished-by-bank [ usnews.com ] When the supply of work meets demand , work becomes valueless.Which leads us to energy.The reason we " modernise " is to reduce costs .
A human costs say 20k/year .
A digging machine costs 250k , with one driver can replace 10 humans digging trenches .
Payback after the 1st year.The cost of the energy for the digger is lower than the costs the humans have to pay to live , plus the humans have a 30 \ % tax on top.So economically , it makes sense to get rid of humans and replace them with machines .
In fact , our monetary system pretty much enforces it.If all human labour can be carried out by machines , then humans will have no money .
i.e. Universal machine labour will destroy capitalism and the monetary system .
Banks etc .
What will happen is the system will devolve into a 2 class system of owners and the owned .
Creditors and debtors .
Neofeudalism.You should read Silvio Gesell .
He came to a similar conclusion .
That if demand is ever satisfied , capitalism stops functioning .
( This is why there will always be poverty .
It 's required by the money system .
) Ofcourse as energy itself ( easy energy resources like coal , oil , gas ) becomes more scarce and expensive , the running of a 10,000 cpu cluster to emulate 100 billion human neurons is likely to consume quite a lot of energy .</tokentext>
<sentencetext>Start with money. You're a bank.
You're going to loan out some money for what reason?
To get more back.
So, the recipient of a loan has to supply something of value.
Say, a house. What happens when the supply of houses matches or exceeds the demand?
Houses become valueless.
You can't make money supplying them.
The bank isn't going to make that loan. So for our existing monetary system, demand must never be satisfied.
We must never build enough houses for all the homeless, and if too many are built, they have to be knocked down. http://online.wsj.com/article/SB120709588093381941.html?mod=todays\_columnists [wsj.com] http://www.usnews.com/money/blogs/fresh-greens/2009/05/05/what-a-waste-new-homes-demolished-by-bank [usnews.com] When the supply of work meets demand, work becomes valueless. Which leads us to energy. The reason we "modernise" is to reduce costs.
A human costs say 20k/year.
A digging machine costs 250k, with one driver can replace 10 humans digging trenches.
Payback after the 1st year. The cost of the energy for the digger is lower than the costs the humans have to pay to live, plus the humans have a 30\% tax on top. So economically, it makes sense to get rid of humans and replace them with machines.
In fact, our monetary system pretty much enforces it. If all human labour can be carried out by machines, then humans will have no money.
i.e. Universal machine labour will destroy capitalism and the monetary system.
Banks etc.
What will happen is the system will devolve into a 2 class system of owners and the owned.
Creditors and debtors.
Neofeudalism. You should read Silvio Gesell.
He came to a similar conclusion.
That if demand is ever satisfied, capitalism stops functioning.
(This is why there will always be poverty.
It's required by the money system.
) Of course, as energy itself (easy energy resources like coal, oil, gas) becomes more scarce and expensive, the running of a 10,000 CPU cluster to emulate 100 billion human neurons is likely to consume quite a lot of energy.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968</parent>
</comment>
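The digger-versus-laborers arithmetic in the comment above can be sanity-checked with a short sketch. All figures (20k/year per worker, 250k per machine, 10 workers replaced) are the commenter's own illustrative numbers, not real data, and the assumption that the machine still needs one paid driver is made explicit here:

```python
# Back-of-envelope payback check for the digger-vs-laborers example.
# All figures are the commenter's illustrative numbers, not real data.
machine_cost = 250_000       # up-front cost of one digging machine
worker_cost = 20_000         # cost of one human laborer, per year
workers_replaced = 10        # laborers the machine replaces
driver_cost = worker_cost    # assumption: the machine still needs one driver

# Annual saving = wages no longer paid, minus the driver still on payroll.
annual_saving = workers_replaced * worker_cost - driver_cost  # 180,000/year
payback_years = machine_cost / annual_saving

print(round(payback_years, 2))  # 1.39
```

With these numbers the machine pays for itself in under a year and a half, which is consistent with the commenter's "payback after the 1st year" claim (they may have ignored the driver's wage, giving 250/200 = 1.25 years instead).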
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096028</id>
	<title>The Money Issue</title>
	<author>Anonymous</author>
	<datestamp>1265048100000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>You know, this wouldn't be considered a bad thing if we just stopped worrying about the outdated monetary system and moved to, say...something more intelligent?</p><p>Here's an idea: http://www.thevenusproject.com/a-new-social-design/essay</p></htmltext>
<tokenext>you know this would n't be considered a bad thing , if we just stopped worrying about the outdated monetary system and moved to say...something more intelligent ? Here 's an idea : http : //www.thevenusproject.com/a-new-social-design/essay</tokentext>
<sentencetext>you know this wouldn't be considered a bad thing, if we just stopped worrying about the outdated monetary system and moved to say...something more intelligent?Here's an idea: http://www.thevenusproject.com/a-new-social-design/essay</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</id>
	<title>No way.</title>
	<author>Bruce Perens</author>
	<datestamp>1265028480000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>Oh come on. I don't even have a computer that can pick up stuff in my room
and organize it without prior input, and nobody does, and that would not be
close to a general AI when it happens.
</p><p>
They're really assuming that the technology will go from zero to
sixty in 20 years. Which they assumed 20 years ago, too, and it didn't
happen. Meanwhile, nobody has any significant understanding of what
consciousness is. Now, it might be that a true AI computer doesn't need to be
<i>conscious,</i> but we still don't know enough about it to fake it. We
also have no system that can on demand form its own symbolic system to deal
with a rich and arbitrary set of inputs similar to those conveyed by the human
senses.
</p><p>
Compare this to things that actually have been achieved: We had the
mathematical theory of computation at least 100 years before there was a
mechanical or electronic system that would practically execute it (Babbage
didn't get his system built). We had the physical theory for space travel
that far back, too.</p><p>

We know very little about how a mind works, except that it keeps turning out to
be more complicated than we expected.
</p><p>
So, I'm really very dubious.</p></htmltext>
<tokenext>Oh come on .
I do n't even have a computer that can pick up stuff in my room and organize it without prior input , and nobody does , and that would not be close to a general AI when it happens .
They 're really assuming that the technology will go from zero to sixty in 20 years .
Which they assumed 20 years ago , too , and it did n't happen .
Meanwhile , nobody has any significant understanding of what consciousness is .
Now , it might be that a true AI computer does n't need to be conscious , but we still do n't know enough about it to fake it .
We also have no system that can on demand form its own symbolic system to deal with a rich and arbitrary set of inputs similar to those conveyed by the human senses .
Compare this to things that actually have been achieved : We had the mathematical theory of computation at least 100 years before there was a mechanical or electronic system that would practically execute it ( Babbage did n't get his system built ) .
We had the physical theory for space travel that far back , too .
We know very little about how a mind works , except that it keeps turning out to be more complicated than we expected .
So , I 'm really very dubious .</tokentext>
<sentencetext>Oh come on.
I don't even have a computer that can pick up stuff in my room
and organize it without prior input, and nobody does, and that would not be
close to a general AI when it happens.
They're really assuming that the technology will go from zero to
sixty in 20 years.
Which they assumed 20 years ago, too, and it didn't
happen.
Meanwhile, nobody has any significant understanding of what
consciousness is.
Now, it might be that a true AI computer doesn't need to be
conscious, but we still don't know enough about it to fake it.
We
also have no system that can on demand form its own symbolic system to deal
with a rich and arbitrary set of inputs similar to those conveyed by the human
senses.
Compare this to things that actually have been achieved: We had the
mathematical theory of computation at least 100 years before there was a
mechanical or electronic system that would practically execute it (Babbage
didn't get his system built).
We had the physical theory for space travel
that far back, too.
We know very little about how a mind works, except that it keeps turning out to
be more complicated than we expected.
So, I'm really very dubious.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100188</id>
	<title>Already Here</title>
	<author>hduff</author>
	<datestamp>1265906640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The Magic 8-Ball</htmltext>
<tokenext>The Magic 8-Ball</tokentext>
<sentencetext>The Magic 8-Ball</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093056</id>
	<title>Tennessee</title>
	<author>Anonymous</author>
	<datestamp>1265029860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I live in Tennessee; AI surpassed human intelligence years ago.</p></htmltext>
<tokenext>I live in Tennessee , AI surpassed human intelligence years ago .</tokentext>
<sentencetext>I live in Tennessee; AI surpassed human intelligence years ago.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093198</id>
	<title>Depends on the test.</title>
	<author>Anonymous</author>
	<datestamp>1265030580000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>If the test is chess, then there are AIs that surpass the vast majority of the human race.</p><p>If the test were, let's say, safely navigating through Manhattan using the same visual signs and signals that a pedestrian would, there isn't anything close to even a relatively helpless human being.</p><p>If the test is understanding language, same thing.  Ditto for cognitive flexibility, the ability to generalize mental skills learned in one situation to a different one.</p><p>Of course many of these kinds of "tests" I'm proposing are very human-centric.  But narrow tests of intelligence are very algorithm-centric.  The narrower the test, the more relatively "intelligent" AI will be.</p><p>Here's an interesting thought, I think.  How long will it be before an AI is created that is capable of outscoring the average human on some IQ test -- given the necessary visual inputs and robotic "hands" to take the test?  I don't think that's very far off. I wouldn't be surprised to see it in my lifetime.  I'd be surprised to see a pedestrian robot who could navigate Manhattan as well as the average human in my lifetime, or who could take leadership and teamwork skills learned in a military job and apply them to a civilian job without reprogramming by a human.</p></htmltext>
<tokenext>If the test is chess , then there are AIs that surpass the vast majority of the human race.If the test were , let 's say , safely navigating through Manhattan using the same visual signs and signals that a pedestrian would , there is n't anything close to even a relatively helpless human being.If the test is understanding language , same thing .
Ditto for cognitive flexibility , the ability to generalize mental skills learned in one situation to a different one.Of course many of these kinds of " tests " I 'm proposing are very human-centric .
But narrow tests of intelligence are very algorithm-centric .
The narrower the test , the more relatively " intelligent " AI will be.Here 's an interesting thought , I think .
How long will it be before an AI is created that is capable of outscoring the average human on some IQ test -- given the necessary visual inputs and robotic " hands " to take the test ?
I do n't think that 's very far off .
I would n't be surprised to see it in my lifetime .
I 'd be surprised to see a pedestrian robot who could navigate Manhattan as well as the average human in my lifetime , or who could take leadership and teamwork skills learned in a military job and apply them to a civilian job without reprogramming by a human .</tokentext>
<sentencetext>If the test is chess, then there are AIs that surpass the vast majority of the human race. If the test were, let's say, safely navigating through Manhattan using the same visual signs and signals that a pedestrian would, there isn't anything close to even a relatively helpless human being. If the test is understanding language, same thing.
Ditto for cognitive flexibility, the ability to generalize mental skills learned in one situation to a different one. Of course many of these kinds of "tests" I'm proposing are very human-centric.
But narrow tests of intelligence are very algorithm-centric.
The narrower the test, the more relatively "intelligent" AI will be. Here's an interesting thought, I think.
How long will it be before an AI is created that is capable of outscoring the average human on some IQ test -- given the necessary visual inputs and robotic "hands" to take the test?
I don't think that's very far off.
I wouldn't be surprised to see it in my lifetime.
I'd be surprised to see a pedestrian robot who could navigate Manhattan as well as the average human in my lifetime, or who could take leadership and teamwork skills learned in a military job and apply them to a civilian job without reprogramming by a human.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097260</id>
	<title>Re:Current computation models not enough</title>
	<author>daver00</author>
	<datestamp>1265883180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>I am pretty much sure that the current computational models, i.e. the Turing Machine, are not enough to explain the human mind.</p></div> </blockquote><p>I'm pretty sure you are right, and I strongly agree with you. Goedel showed that the human mind is capable of conceiving things beyond the capability of any formal system devised by the human mind, and Turing showed that no Turing machine can solve all problems. Between these two giants and the vast body of work that has grown from their findings, I remain utterly skeptical that AI is even something humans can do. There is something eerily abstract about the whole notion.</p><p>I always love to pose this question to people on this topic and similar ones: how can you understand how your mind works from an outside perspective? You can't; it's a logical fallacy. Absolutely every thought you have ever had is the consequence of a human mind at work. You cannot escape that framework and objectively analyse it, and I suspect if you could, you would find something that is completely orthogonal to what your mind is capable of understanding.</p><p>To me, a human mind conjuring up a working AI is akin to the human mind actually visualising higher dimensions: it's too abstract.</p><p>We are Flatlanders in the world of AI; we'll never achieve it.</p>
	</htmltext>
<tokenext>I am pretty much sure that the current computational models .
I.e. Turing Machine are not enough to explain the human mind .
I 'm pretty sure you are right , and I 'm pretty sure I strongly agree with you .
Goedel showed that the human mind is capable of conceiving things beyond the capability of any formal system devised by the human mind , Turing showed that not Turing-complete machine can can solve all problems .
Between these two giants and the vast body of work that has grown from their findings , I remain utterly skeptical that AI is even something humans can do .
There is something eerily abstract about the whole notion.I always love to pose this question to people on this topic and similar ones : How can you understand how your mind works from an outside perspective .
You ca n't , its a logical fallacy , everything , absolutely every thought you have ever is the consequence of a human mind at work .
You can not escape that framework and objectively analyse it , and I suspect if you could , you would find something that is completely orthogonal to what your mind is capable of understanding.To me , a human mind conjuring up a working AI is akin to the human mind actually visualising higher dimensions : its too abstract.We are flat landers in the world of AI , we 'll never achieve it .</tokentext>
<sentencetext>I am pretty much sure that the current computational models, i.e. the Turing Machine, are not enough to explain the human mind.
I'm pretty sure you are right, and I strongly agree with you.
Goedel showed that the human mind is capable of conceiving things beyond the capability of any formal system devised by the human mind, and Turing showed that no Turing machine can solve all problems.
Between these two giants and the vast body of work that has grown from their findings, I remain utterly skeptical that AI is even something humans can do.
There is something eerily abstract about the whole notion.
I always love to pose this question to people on this topic and similar ones: how can you understand how your mind works from an outside perspective?
You can't; it's a logical fallacy. Absolutely every thought you have ever had is the consequence of a human mind at work.
You cannot escape that framework and objectively analyse it, and I suspect if you could, you would find something that is completely orthogonal to what your mind is capable of understanding. To me, a human mind conjuring up a working AI is akin to the human mind actually visualising higher dimensions: it's too abstract. We are Flatlanders in the world of AI; we'll never achieve it.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093112</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093778</id>
	<title>Re:No way.</title>
	<author>Anonymous</author>
	<datestamp>1265033460000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>&gt; Oh come on. I don't even have a computer that can pick up stuff in my room and organize it without prior input, and nobody does, and that would not be close to a general AI when it happens.</p><p>Riiiight... Coz the main priority of AI research is, of course, to build a computer to clean your room...<br>AI might not be that advanced, but that's no excuse to raise such idiotic objections.</p></htmltext>
<tokenext>&gt; Oh come on .
I do n't even have a computer that can pick up stuff in my room and organize it without prior input , and nobody does , and that would not be close to a general AI when it happens.Riiiight ... Coz the main priority of AI research is , of course , build a computer to clean your room ...AI might not be that advanced , but that 's no excuse to raise such idiotic objections .</tokentext>
<sentencetext>&gt; Oh come on.
I don't even have a computer that can pick up stuff in my room and organize it without prior input, and nobody does, and that would not be close to a general AI when it happens. Riiiight... Coz the main priority of AI research is, of course, to build a computer to clean your room... AI might not be that advanced, but that's no excuse to raise such idiotic objections.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100454</id>
	<title>Re:Turing, not long. The rest... wait a long time.</title>
	<author>Anonymous</author>
	<datestamp>1265907720000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>Heck, even simple programs like Eliza had some humans fooled decades ago.</p></div><p>You obviously have only a very vague idea of what a Turing test is.</p><div class="quote"><p>On the other hand, while advances in computing power have been impressive, advances in "AI" have been far less so. They have been extremely rare, in fact. I do not know of a single major breakthrough that has been made in the last 20 years.</p></div><p>Yes, I have no trouble believing that you don't know of any.</p><div class="quote"><p>While the relatively vast computing power available today can make certain programs <b>seem</b> pretty smart, that is still not the same as artificial intelligence, which I believe is a major qualitative difference, not just quantitative. [...]</p></div><p>So assuming a real AI exists, how could it ever convince you of its existence? Presumably by seeming dumb rather than smart?</p>
	</htmltext>
<tokenext>Heck , even simple programs like Eliza had some humans fooled decades ago.You obviously have only a very vague idea of what a Turing test is.On the other hand , while advances in computing power have been impressive , advances in " AI " have been far less so .
They have been extremely rare , in fact .
I do not know of a single major breakthrough that has been made in the last 20 years.Yes , I have no trouble believing that you do n't know of any.While the relatively vast computing power available today can make certain programs seem pretty smart , that is still not the same as artificial intelligence , which I believe is a major qualitative difference , not just quantitative .
[ ... ] So assuming a real AI exists , how could it ever convince you of its existence ?
Presumably by seeming dumb rather than smart ?</tokentext>
<sentencetext>Heck, even simple programs like Eliza had some humans fooled decades ago. You obviously have only a very vague idea of what a Turing test is. On the other hand, while advances in computing power have been impressive, advances in "AI" have been far less so.
They have been extremely rare, in fact.
I do not know of a single major breakthrough that has been made in the last 20 years. Yes, I have no trouble believing that you don't know of any. While the relatively vast computing power available today can make certain programs seem pretty smart, that is still not the same as artificial intelligence, which I believe is a major qualitative difference, not just quantitative.
[...] So assuming a real AI exists, how could it ever convince you of its existence?
Presumably by seeming dumb rather than smart?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093082</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093430</id>
	<title>if you are going to ask experts...</title>
	<author>wakim1618</author>
	<datestamp>1265031840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>This should be a slashdot poll. When will we have an AI that can debug VB better than humans:

(1) Wha, VB can never be truly debugged!

(2) Soon. April 1, 2010.

(3) Within 5 years when I am old enough to move out of my parents' basement and go to college.

(4) 2112 and two minutes later, the AI will become smart enough to know better and outsource the job to low productivity humans.</htmltext>
<tokenext>This should be a slashdot poll .
When will we have an AI that can debug VB better than humans : ( 1 ) Wha , VB can never be truly debugged !
( 2 ) Soon .
April 1 , 2010 .
( 3 ) Within 5 years when I am old enough to move out of my parents ' basement and go to college .
( 4 ) 2112 and two minutes later , the AI will become smart enough to know better and outsource the job to low productivity humans .</tokentext>
<sentencetext>This should be a slashdot poll.
When will we have an AI that can debug VB better than humans:

(1) Wha, VB can never be truly debugged!
(2) Soon.
April 1, 2010.
(3) Within 5 years when I am old enough to move out of my parents' basement and go to college.
(4) 2112 and two minutes later, the AI will become smart enough to know better and outsource the job to low productivity humans.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31109628</id>
	<title>My Resume</title>
	<author>neurospyder</author>
	<datestamp>1265911020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Autodidactic<br>30 year old<br>willing to take IQ tests<br>Formal Degree A.S. Computer Information Systems<br>Would like to be paid to study the psychology of human computer interaction and document my learning process.</p><p>
&nbsp; <a href="http://www.amazon.com/Psychology-Human-Computer-Interaction-Stuart-Card/dp/0898598591" title="amazon.com" rel="nofollow">http://www.amazon.com/Psychology-Human-Computer-Interaction-Stuart-Card/dp/0898598591</a> [amazon.com]</p><p>Can build Linux From Scratch and tinker with it.<br>Somewhat familiar with AIML, well I haven't edited it much, just a little trial and error.</p><p>"RE: Resume" at neurospyder at gmail dot com</p><p>I don't want to move.</p></htmltext>
<tokenext>Autodidactic30 year oldwilling to take IQ testsFormal Degree A.S. Computer Information SystemsWould like to be paid to study the psychology of human computer interaction and document my learning process .
  http : //www.amazon.com/Psychology-Human-Computer-Interaction-Stuart-Card/dp/0898598591 [ amazon.com ] Can build Linux From Scratch and tinker with it.Somewhat familiar with AIML , well I have n't edited it much , just a little trial and error .
" RE : Resume " at neurospyder at gmail dot comI do n't want to move .</tokentext>
<sentencetext>Autodidactic. 30 years old, willing to take IQ tests. Formal degree: A.S. Computer Information Systems. Would like to be paid to study the psychology of human computer interaction and document my learning process.
  http://www.amazon.com/Psychology-Human-Computer-Interaction-Stuart-Card/dp/0898598591 [amazon.com] Can build Linux From Scratch and tinker with it. Somewhat familiar with AIML, well I haven't edited it much, just a little trial and error.
"RE: Resume" at neurospyder at gmail dot com. I don't want to move.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097256</id>
	<title>spooky prescient</title>
	<author>epine</author>
	<datestamp>1265883120000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>Which <i>three men in a tub</i> assumed 20 years ago, too, and it didn't happen.</p></div><p>The first rule of thumb is never to believe a prediction by anyone who writes grant applications for a livelihood, which covers most living scientists.</p><p>Computers will acquire a patchwork of amazing abilities over the next three decades.  I'm not sure it's particularly useful to measure this against a three year old.  Right now we're further along on "fly airplane" than "tie shoes". If there were a Turing test to declare whether a task is simple or not, humans would fail.</p><p>A Google data center with 100,000 CPU nodes is already pretty far up the cognitive scale, but it's not a form of cognition we've bothered to define as such.  The most important intelligence will be assisted intelligence: what humans accomplish in collaboration with their tools.  The tools will become increasingly amazing, at first on a patchwork basis, and then the seams will become increasingly unclear.</p><p>Right now social networking sites predict what we might find interesting on fairly trivial low-dimensional criteria.  Netflix must be the all-time champion of the drunken I-fought-with-my-wife-tonight 1-5 rating.  Could the data set possibly be less rich or more corrupt?  And already we squeeze something out.  Just wait until the computers know everything about us and the ability of the computer/network to anticipate our cognitive whims becomes spooky prescient.</p><p>On another front, some of the fruits of neurology are now coming online.  I have no idea whether this stuff works or not.
Typical how we trip over our own shoelaces, trying to get speech recognition to work *before* mastering auditory grouping, which strikes me as far more fundamental.</p><p>From <a href="http://www.audience.com/about.html" title="audience.com">Audience</a> [audience.com], based on research by Lloyd Watts:</p><div class="quote"><p>Audience is the first company to deliver a commercial product based on the science of [a]uditory [s]cene [a]nalysis, which entails the grouping of components in a complex mixture of sound into sources. Just as the human auditory system can readily ignore background noises while focusing on a voice of interest, [our stuff achieves] noise suppression up to 30 dB for both stationary and non-stationary noise sources to provide [adjective of awesomeness] voice quality within even the [pertinent superlative].</p></div>
	</htmltext>
<tokenext>Which three men in a tub assumed 20 years ago , too , and it did n't happen.The first rule of thumb is never to believe a prediction by anyone who writes grant applications for a livelihood , which covers most living scientists.Computers will acquire a patchwork of amazing abilities over the next three decades .
I 'm not sure it 's particularly useful to measure this against a three year old .
Right now we 're further along on " fly airplane " than " tie shoes " .
If there was a Turing test to declare whether a task is simple or not , humans would fail.A Google data center with 100,000 CPU nodes is already pretty far up the cognitive scale , but it 's not a form of cognition we 've bothered to define as such .
The most important intelligence will be assisted intelligence : what humans accomplish in collaboration with their tools .
The tools will become increasingly amazing , at first on a patchwork basis , and then the seams will become increasingly unclear.Right now social networking sites predict what we might find interesting on fairly trivial low-dimensional criteria .
Netflix must be the all-time champion of the drunken I-fought-with-my-wife-tonight 1-5 rating .
Could the data set possibly be less rich or more corrupt ?
And already we squeeze something out .
Just wait until the computers know everything about us and the ability of the computer/network to anticipate our cognitive whims becomes spooky prescient.On another front , some of the fruits of neurology are now coming on line .
I have no idea whether this stuff works or not .
Typical how we trip over our own shoelaces , trying to get speech recognition to work * before * mastering auditory grouping , which strikes me as far more fundamental.From Audience [ audience.com ] based on research by Lloyd WattsAudience is the first company to deliver a commercial product based on the science of [ a ] uditory [ s ] cene [ a ] nalysis , which entails the grouping of components in a complex mixture of sound into sources .
Just as the human auditory system can readily ignore background noises while focusing on a voice of interest , [ our stuff achieves ] noise suppression up to 30 dB for both stationary and non-stationary noise sources to provide [ adjective of awesomeness ] voice quality within even the [ pertinent superlative ] .</tokentext>
<sentencetext>Which three men in a tub assumed 20 years ago, too, and it didn't happen. The first rule of thumb is never to believe a prediction by anyone who writes grant applications for a livelihood, which covers most living scientists. Computers will acquire a patchwork of amazing abilities over the next three decades.
I'm not sure it's particularly useful to measure this against a three year old.
Right now we're further along on "fly airplane" than "tie shoes".
If there were a Turing test to declare whether a task is simple or not, humans would fail. A Google data center with 100,000 CPU nodes is already pretty far up the cognitive scale, but it's not a form of cognition we've bothered to define as such.
The most important intelligence will be assisted intelligence: what humans accomplish in collaboration with their tools.
The tools will become increasingly amazing, at first on a patchwork basis, and then the seams will become increasingly unclear. Right now social networking sites predict what we might find interesting on fairly trivial low-dimensional criteria.
Netflix must be the all-time champion of the drunken I-fought-with-my-wife-tonight 1-5 rating.
Could the data set possibly be less rich or more corrupt?
And already we squeeze something out.
Just wait until the computers know everything about us and the ability of the computer/network to anticipate our cognitive whims becomes spooky prescient. On another front, some of the fruits of neurology are now coming online.
I have no idea whether this stuff works or not.
Typical how we trip over our own shoelaces, trying to get speech recognition to work *before* mastering auditory grouping, which strikes me as far more fundamental. From Audience [audience.com], based on research by Lloyd Watts: Audience is the first company to deliver a commercial product based on the science of [a]uditory [s]cene [a]nalysis, which entails the grouping of components in a complex mixture of sound into sources.
Just as the human auditory system can readily ignore background noises while focusing on a voice of interest, [our stuff achieves] noise suppression up to 30 dB for both stationary and non-stationary noise sources to provide [adjective of awesomeness] voice quality within even the [pertinent superlative].
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098572</id>
	<title>Anonymous Coward</title>
	<author>Anonymous</author>
	<datestamp>1265898420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Hahaha! AI has been the biggest disappointment ever. All these promises, like AI matching humans etc., we have heard before (about 30 years ago) in one way or another. Nothing came true. AI is based on the assumption that the brain is a biological computer, which is just a hypothesis. The failure of AI so far is a hint that this hypothesis might be wrong.</p></htmltext>
<tokenext>Hahaha !
AI has been the biggest disappointment ever since .
All these promises like AI matching humans etc .
we have heard before ( about 30 years ago ) in one way or another .
Nothing came true .
AI is based on the assumption that the brain is a biological computer which is just a hypothesis .
The failure of AI so far is a hint that this hypothesis might be wrong .</tokentext>
<sentencetext>Hahaha!
AI has been the biggest disappointment ever.
All these promises, like AI matching humans etc., we have heard before (about 30 years ago) in one way or another.
Nothing came true.
AI is based on the assumption that the brain is a biological computer which is just a hypothesis.
The failure of AI so far is a hint that this hypothesis might be wrong.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099204</id>
	<title>Re:The Turing Test</title>
	<author>ENIGMAwastaken</author>
	<datestamp>1265901840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Finally, someone saying something sensible about the Turing Test.
<br> 
It's an absurdity that serious AI researchers say things like "we need to make our computer stupider, so it can pass an arbitrary test".  These people are supposed to be smart, but here they are telling the world that they think the best method of making an intelligent computer is to artificially handicap it to be bad at math, make stupid, illogical reasoning choices, have poor memory, etc., just so it can pass as a human.
<br> 
The utter inanity of that thought is just amazing: the way to make a computer more intelligent is to severely limit it.
<br> 
Not to mention the dozens of other problems the Turing test runs into, e.g. the fact that dogs and horses and dolphins all seem rather intelligent but would fail a Turing Test, or that human calculators like autistic savants, or John von Neumann (he might know something about computers!), would fail.  John von Neumann could instantly multiply two 5-digit numbers in his head when he was 4.  He would fail the Turing test, ergo he's not intelligent or conscious.  Someone should have told Mrs. von Neumann!</htmltext>
<tokenext>Finally , someone saying something sensible about the Turing Test .
It 's an absurdity that serious AI research say things like " we need to make our computer stupider , so it can pass an arbitrary test " .
These people are supposed to be smart , but here they telling the world that they think the best method of making an intelligent computer is to artificially handicap to be bad at math , make stupid , illogical reasoning choices , have poor memory , etc .
just so it can pass as a human .
The utter inanity of that thought is just amazing : the way to make a computer more intelligent is to severely limit it .
Not to mention the dozens of other problems the Turing test runs into , eg .
the fact that dogs and horses and dolphins all seem rather intelligent , but would fail a Turing Test , that human calculators like autistic savants , or John von Neumann ( he might know something about computers !
) would fail .
John von Neumann could instantly multiple 2 5 digit numbers in his head when he was 4 .
He would fail the Turing test , ergo he 's not intelligent or conscious .
Someone should have told Mrs. von Neumann !</tokentext>
<sentencetext>Finally, someone saying something sensible about the Turing Test.
It's an absurdity that serious AI researchers say things like "we need to make our computer stupider, so it can pass an arbitrary test".
These people are supposed to be smart, but here they are telling the world that they think the best method of making an intelligent computer is to artificially handicap it to be bad at math, make stupid, illogical reasoning choices, have poor memory, etc., just so it can pass as a human.
The utter inanity of that thought is just amazing: the way to make a computer more intelligent is to severely limit it.
Not to mention the dozens of other problems the Turing test runs into, e.g. the fact that dogs and horses and dolphins all seem rather intelligent but would fail a Turing Test, or that human calculators like autistic savants, or John von Neumann (he might know something about computers!), would fail.
John von Neumann could instantly multiply two 5-digit numbers in his head when he was 4.
He would fail the Turing test, ergo he's not intelligent or conscious.
Someone should have told Mrs. von Neumann!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093108</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093890</id>
	<title>Re:Definitions</title>
	<author>Anonymous</author>
	<datestamp>1265033880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>People generally neglect the fact that humans can perform amazing calculations nearly instantaneously.  Take for example catching a thrown baseball - your brain can very quickly calculate where you need to put your glove to catch the ball, including variations caused by wind, ball rotation, variations in terrain, etc.


Creating an AI that could properly take in all the inputs necessary to calculate the flight path of the ball is pretty difficult - add in the calculations required to move a human body and attached glove to the appropriate place at the appropriate time across variable terrain...


Humans (hell even dogs in this example) are more impressive than you might think.</htmltext>
<tokenext>People generally neglect the fact that humans can perform amazing calculations nearly instantaneously .
Take for example catching a thrown baseball - you brain can very quickly calculate where you need to put your glove to catch the ball , including variations caused by wind , ball rotation , variations in terrain , etc .
Creating an AI that could properly take in all the inputs necessary to calculate the flight path of the ball is pretty difficult - add in the calculations required to move a human body and attached glove to the appropriate place at the appropriate time across variable terrain...... . Humans ( hell even dogs in this example ) are more impressive than you might think .</tokentext>
<sentencetext>People generally neglect the fact that humans can perform amazing calculations nearly instantaneously.
Take for example catching a thrown baseball - your brain can very quickly calculate where you need to put your glove to catch the ball, including variations caused by wind, ball rotation, variations in terrain, etc.
Creating an AI that could properly take in all the inputs necessary to calculate the flight path of the ball is pretty difficult - add in the calculations required to move a human body and attached glove to the appropriate place at the appropriate time across variable terrain...


Humans (hell even dogs in this example) are more impressive than you might think.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099608</id>
	<title>Re:The obvious solution</title>
	<author>Neurotoxic666</author>
	<datestamp>1265904000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Wrong. I bet you did not really mean that, but you've been modded "interesting". I guess I'm arguing more with the mods than with you...

<br> <br>In fact, a mechanical copy of yourself would exist after your suicide. But YOU would not be there. YOU would be gone with your meatbag. You cannot escape your body and become a machine.

<br> <br>You may live the illusion of doing so, you may have a machine that replicates what you are, you may dream or believe in whatever you like to -- but in the end, you're just stuck in there, in that meatbag. And that conscious You will leave along with the pile of flesh that made it happen.</htmltext>
<tokenext>Wrong .
I bet you did not really mean that , but you 've been modded " interesting " .
I guess I 'm arguing more with the mods than with you.. . In fact , a mechanical copy of yourself would exist after your suicide .
But YOU would not be there .
YOU would be gone with your meatbag .
You can not escape your body and become a machine .
You may live the illusion of doing so , you may have a machine that replicates what you are , you may dream or believe in whatever you like to -- but in the end , you 're just stuck in there , in that meatbag .
And that conscious You will leave along with the pile of flesh that made it happen .</tokentext>
<sentencetext>Wrong.
I bet you did not really mean that, but you've been modded "interesting".
I guess I'm arguing more with the mods than with you...

 In fact, a mechanical copy of yourself would exist after your suicide.
But YOU would not be there.
YOU would be gone with your meatbag.
You cannot escape your body and become a machine.
You may live the illusion of doing so, you may have a machine that replicates what you are, you may dream or believe in whatever you like to -- but in the end, you're just stuck in there, in that meatbag.
And that conscious You will leave along with the pile of flesh that made it happen.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097000</id>
	<title>Re:What is AI anyway?</title>
	<author>Anonymous</author>
	<datestamp>1265879160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Are you assuming that a human brain can create random numbers? I can't believe any human can produce numbers that pass statistical tests for randomness.</p></htmltext>
<tokentext>Are you assuming that a human brain can create random numbers ?
I ca n't believe any human can produce numbers that pass statistical tests for randomness .</tokentext>
<sentencetext>Are you assuming that a human brain can create random numbers?
I can't believe any human can produce numbers that pass statistical tests for randomness.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093916</id>
	<title>From what I can tell of my peers...</title>
	<author>joocemann</author>
	<datestamp>1265034000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>... it shouldn't be long.</p></htmltext>
<tokentext>... it should n't be long .</tokentext>
<sentencetext>... it shouldn't be long.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100812</id>
	<title>re: perhaps an intelligent life form</title>
	<author>hittjw</author>
	<datestamp>1265909460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Well the sooner AI surpasses human intelligence the safer people will feel with hovering armed robots commanding their town.  Who needs people anyway? One day computers will comment on news stories sharing their own opinions, if they aren't already.</htmltext>
<tokentext>Well the sooner AI surpasses human intelligence the safer people will feel with hovering armed robots commanding their town .
Who needs people anyway ?
One day computers will comment on news stories sharing their own opinions , if they are n't already .</tokentext>
<sentencetext>Well the sooner AI surpasses human intelligence the safer people will feel with hovering armed robots commanding their town.
Who needs people anyway?
One day computers will comment on news stories sharing their own opinions, if they aren't already.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095272</id>
	<title>Re:No way.</title>
	<author>earthforce\_1</author>
	<datestamp>1265042340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It's actually quite educational to look back 100 years or more, and see exactly where they thought we would be:</p><p>To view the "future" as it was seen in each decade since 1870:<br><a href="http://www.paleofuture.com/" title="paleofuture.com">http://www.paleofuture.com/</a> [paleofuture.com]</p><p>I get a kick out of the postcard with the pilot flying by a sky bar grabbing a drink "to go".  Never mind drunk driving, hammered is the only way to fly!</p></htmltext>
<tokentext>It 's actually quite educational to look back 100 years or more , and see exactly where they thought we would be : To view the " future " as it was seen in each decade since 1870 : http : //www.paleofuture.com/ [ paleofuture.com ] I get a kick out of the postcard with the pilot flying by a sky bar grabbing a drink " to go " .
Never mind drunk driving , hammered is the only way to fly !</tokentext>
<sentencetext>It's actually quite educational to look back 100 years or more, and see exactly where they thought we would be:To view the "future" as it was seen in each decade since 1870:http://www.paleofuture.com/ [paleofuture.com]I get a kick out of the postcard with the pilot flying by a sky bar grabbing a drink "to go".
Never mind drunk driving, hammered is the only way to fly!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096492</id>
	<title>I'll just say this...</title>
	<author>Anonymous</author>
	<datestamp>1265052960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I will *love* my Saber Marionette<nobr> <wbr></nobr>;), yeah!<nobr> <wbr></nobr>:P</p></htmltext>
<tokentext>I will * love * my Saber Marionette ; ) , yeah !
: P</tokentext>
<sentencetext>I will *love* my Saber Marionette ;), yeah!
:P</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098512</id>
	<title>There are 21 AI experts?</title>
	<author>cvtan</author>
	<datestamp>1265897820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Didn't there used to be 21 THOUSAND AI experts?  I'll wait until there is only one AI expert - a non-human running Bioshock 3.</htmltext>
<tokentext>Did n't there used to be 21 THOUSAND AI experts ?
I 'll wait until there is only one AI expert - a non-human running Bioshock 3 .</tokentext>
<sentencetext>Didn't there used to be 21 THOUSAND AI experts?
I'll wait until there is only one AI expert - a non-human running Bioshock 3.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097662</id>
	<title>Re:This touches on a problem I have</title>
	<author>u38cg</author>
	<datestamp>1265887800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I am very concerned about these new-fangled things they have, "factories" and "production lines".  Apparently they massively reduce the need for labour per unit of output.  I am worried that if these entities are allowed to exist, thousands or millions of poor people will no longer have jobs and will end up being too poor to eat properly, afford housing, and have healthcare.</htmltext>
<tokentext>I am very concerned about these new-fangled things they have , " factories " and " production lines " .
Apparently they massively reduce the need for labour per unit of output .
I am worried that if these entities are allowed to exist , thousands or millions of poor people will no longer have jobs and will end up being too poor to eat properly , afford housing , and have healthcare .</tokentext>
<sentencetext>I am very concerned about these new-fangled things they have, "factories" and "production lines".
Apparently they massively reduce the need for labour per unit of output.
I am worried that if these entities are allowed to exist, thousands or millions of poor people will no longer have jobs and will end up being too poor to eat properly, afford housing, and have healthcare.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093496</id>
	<title>Next Month</title>
	<author>Anonymous</author>
	<datestamp>1265032320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>:)</p></htmltext>
<tokentext>: )</tokentext>
<sentencetext>:)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093392</id>
	<title>Re:What is AI anyway?</title>
	<author>Anonymous</author>
	<datestamp>1265031660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Can you pick a truly random number? If so, what makes you think that didn't come from a deterministic process fed with "random" data? You're supposing that intelligence is something that computers can't emulate; calculation with a 'soul'. What makes you think that?</p></htmltext>
<tokentext>Can you pick a truly random number ?
If so , what makes you think that did n't come from a deterministic process fed with " random " data ?
You 're supposing that intelligence is something that computers ca n't emulate ; calculation with a 'soul' .
What makes you think that ?</tokentext>
<sentencetext>Can you pick a truly random number?
If so, what makes you think that didn't come from a deterministic process fed with "random" data?
You're supposing that intelligence is something that computers can't emulate; calculation with a 'soul'.
What makes you think that?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094558</id>
	<title>Re:Current computation models not enough</title>
	<author>Anonymous</author>
	<datestamp>1265037180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I think that's problematic to say. The human brain itself is a neural network. Are we, too, mere Turing Machines unable to approach the human mind?</p></htmltext>
<tokentext>I think that 's problematic to say .
The human brain itself is a neural network .
Are we , too , mere Turing Machines unable to approach the human mind ?</tokentext>
<sentencetext>I think that's problematic to say.
The human brain itself is a neural network.
Are we, too, mere Turing Machines unable to approach the human mind?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093112</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31102748</id>
	<title>Re:This touches on a problem I have</title>
	<author>Anonymous</author>
	<datestamp>1265917920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>Having 10's million of people too poor to eat properly, afford housing, and healthcare is a bad thing and would ultimately drag down the country.</p></div><p>You aren't that far away from this scenario now...</p>
	</htmltext>
<tokentext>Having 10 's million of people too poor to eat properly , afford housing , and healthcare is a bad thing and would ultimately drag down the country.You are n't that far away from this scenario now.. .</tokentext>
<sentencetext>Having 10's million of people too poor to eat properly, afford housing, and healthcare is a bad thing and would ultimately drag down the country.You aren't that far away from this scenario now...
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095942</id>
	<title>Why - Why not?</title>
	<author>Anonymous</author>
	<datestamp>1265047380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Why would we not develop an AI that surpasses us, to the extent of its no longer being stupider than "The Human Race&#169;" - we are discussing a race of monkeys here who have managed to bring weapons technology to a high enough standard to wipe out their entire planet a few times over, this before they even got to meet the neighbours.</p><p>It's not as if that particular fucking intelligence hurdle ("Smarter - Than Humans!") is set particularly high, or?</p></htmltext>
<tokentext>Why would we not develop an AI that surpasses us , to the extent of its no longer being stupider than " The Human Race   " - we are discussing a race of monkeys here who have managed to bring weapons technology to a high enough standard to wipe out their entire planet a few times over , this before they even got to meet the neighbours.It 's not as if that particular fucking intelligence hurdle ( " Smarter - Than Humans !
" ) is set particularly high , or ?</tokentext>
<sentencetext>Why would we not develop an AI that surpasses us, to the extent of its no longer being stupider than "The Human Race©" - we are discussing a race of monkeys here who have managed to bring weapons technology to a high enough standard to wipe out their entire planet a few times over, this before they even got to meet the neighbours.It's not as if that particular fucking intelligence hurdle ("Smarter - Than Humans!
") is set particularly high, or?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096102</id>
	<title>when will AI equal human intelligence?</title>
	<author>Anonymous</author>
	<datestamp>1265048580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>When AI wants to watch a couple of hours of television a day then it will be equal to human intelligence.</htmltext>
<tokentext>When AI wants to watch a couple of hours of television a day then it will be equal to human intelligence .</tokentext>
<sentencetext>When AI wants to watch a couple of hours of television a day then it will be equal to human intelligence.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095834</id>
	<title>Re:Let's see.</title>
	<author>hellop2</author>
	<datestamp>1265046660000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext>I believe the quote is: &ldquo;The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.&rdquo;<br> <br>

You see, his point was that computers will never have human-like intelligence.  Humans don't think in binary.  Humanity exists in part because humans are able to forget...  that we all age evokes compassion.  Children are special to humans because they cannot be easily reproduced.  Computers can be mass-produced.<br> <br>

These are some reasons why a computer will never have a need for intelligence in the way we think of it.   Therefore computer "intelligence" is fundamentally different than human intelligence.  It seems to me that you're missing Dijkstra's point altogether.  It's not about speed.  It's about applying a term incorrectly.  The speed of submarines is interesting.  They can go faster than fish.  Dive deeper than fish.  See underwater.  But they don't "swim" because swimming is something that living things do. Likewise, computers could solve problems faster than humans.  Recall more information.  Derive better solutions.  But "thinking" is something that living things do.<br> <br>

Also, you didn't even link to the <a href="http://www.google.com/search?hl=en&amp;safe=off&amp;client=firefox-a&amp;rls=com.ubuntu\%3Aen-US\%3Aofficial&amp;hs=LCm&amp;num=50&amp;q=\%E2\%80\%9CThe+question+of+whether+a+computer+can+think+is+no+more+interesting+than+the+question+of+whether+a+submarine+can+swim.\%E2\%80\%9D&amp;aq=f&amp;aqi=&amp;oq=" title="google.com" rel="nofollow">quote.</a> [google.com]
	</htmltext>
<tokentext>I believe the quote is :    The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.    You see , his point was that computers will never have human-like intelligence .
Humans do n't think in binary .
Humanity exists in part because humans are able to forget... that we all age evokes compassion .
Children are special to humans because they can not be easily reproduced .
Computers can be mass-produced .
These are some reasons why a computer will never have a need for intelligence in the way we think of it .
Therefore computer " intelligence " is fundamentally different than human intelligence .
It seems to me that you 're missing Dijkstra 's point altogether .
It 's not about speed .
It 's about applying a term incorrectly .
The speed of submarines is interesting .
They can go faster than fish .
Dive deeper than fish .
See underwater .
But they do n't " swim " because swimming is something that living things do .
Likewise , computers could solve problems faster than humans .
Recall more information .
Derive better solutions .
But " thinking " is something that living things do .
Also , you did n't even link to the quote .
[ google.com ]</tokentext>
<sentencetext>I believe the quote is: “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” 

You see, his point was that computers will never have human-like intelligence.
Humans don't think in binary.
Humanity exists in part because humans are able to forget...  that we all age evokes compassion.
Children are special to humans because they cannot be easily reproduced.
Computers can be mass-produced.
These are some reasons why a computer will never have a need for intelligence in the way we think of it.
Therefore computer "intelligence" is fundamentally different than human intelligence.
It seems to me that you're missing Dijkstra's point altogether.
It's not about speed.
It's about applying a term incorrectly.
The speed of submarines is interesting.
They can go faster than fish.
Dive deeper than fish.
See underwater.
But they don't "swim" because swimming is something that living things do.
Likewise, computers could solve problems faster than humans.
Recall more information.
Derive better solutions.
But "thinking" is something that living things do.
Also, you didn't even link to the quote.
[google.com]
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092804</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31107112</id>
	<title>Re:This touches on a problem I have</title>
	<author>Maxo-Texas</author>
	<datestamp>1265891520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>We are very close to machines that can...</p><p>pick up a particular object out of a container of mixed unsorted objects.<br>put that object in a particular place at a particular alignment.</p><p>So...<br>Most "stocker" type jobs are probably at great risk.<br>I can see a little fleet of stocking robots loading the grocery stores every night very soon (next 5 years).</p></htmltext>
<tokentext>We are very close to machines that can...pick up a particular object out of a container of mixed unsorted objects.put that object in a particular place at a particular alignment.So...Most " stocker " type jobs are probably at great risk.I can see a little fleet of stocking robots loading the grocery stores every night very soon ( next 5 years ) .</tokentext>
<sentencetext>We are very close to machines that can...pick up a particular object out of a container of mixed unsorted objects.put that object in a particular place at a particular alignment.So...Most "stocker" type jobs are probably at great risk.I can see a little fleet of stocking robots loading the grocery stores every night very soon (next 5 years).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093524</id>
	<title>Re:Such balogna.</title>
	<author>lena\_10326</author>
	<datestamp>1265032440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I think there's a difference between intelligence and consciousness. We can think of an algorithm as being created to make decisions by selecting the option with the best outcome so even though it's not consciously self-aware it can make decisions we would recognize as intelligent.</p><blockquote><div><p>we don't even really know what human intelligence is</p></div></blockquote><p>It doesn't matter. We still measure it. IQ tests; exams testing knowledge, memory, comprehension skills; etc. There are lots of things we measure but don't understand.</p>
	</htmltext>
<tokentext>I think there 's a difference between intelligence and consciousness .
We can think of an algorithm as being created to make decisions by selecting the option with the best outcome so even though it 's not consciously self-aware it can make decisions we would recognize as intelligent.we do n't even really know what human intelligence isIt does n't matter .
We still measure it .
IQ tests ; exams testing knowledge , memory , comprehension skills ; etc .
There are lots of things we measure but do n't understand .</tokentext>
<sentencetext>I think there's a difference between intelligence and consciousness.
We can think of an algorithm as being created to make decisions by selecting the option with the best outcome so even though it's not consciously self-aware it can make decisions we would recognize as intelligent.we don't even really know what human intelligence isIt doesn't matter.
We still measure it.
IQ tests; exams testing knowledge, memory, comprehension skills; etc.
There are lots of things we measure but don't understand.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092812</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092804</id>
	<title>Let's see.</title>
	<author>johncadengo</author>
	<datestamp>1265028660000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>To play off a famous <a href="http://en.wikipedia.org/wiki/Edsger\_W.\_Dijkstra" title="wikipedia.org">Edsger Dijkstra</a> [wikipedia.org] quote, the question of when AI will surpass human intelligence is just about as interesting as asking when submarines will swim faster than fish...</p>
	</htmltext>
<tokentext>To play off a famous Edsger Dijkstra [ wikipedia.org ] quote , the question of when AI will surpass human intelligence is just about as interesting as asking when submarines will swim faster than fish.. .</tokentext>
<sentencetext>To play off a famous Edsger Dijkstra [wikipedia.org] quote, the question of when AI will surpass human intelligence is just about as interesting as asking when submarines will swim faster than fish...
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100616</id>
	<title>Re:Definitions</title>
	<author>eric-x</author>
	<datestamp>1265908440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It calculates nothing. It's just a guess based on previous observations. It takes a few initial conditions into consideration and then pulls a trajectory from memory; during the flight it does this over and over again to correct for the errors it made in its guess.</p></htmltext>
<tokentext>It calculates nothing .
It 's just a guess based on previous observations .
It takes a few initial conditions into consideration and then pulls a trajectory from memory ; during the flight it does this over and over again to correct for the errors it made in its guess .</tokentext>
<sentencetext>It calculates nothing.
It's just a guess based on previous observations.
It takes a few initial conditions into consideration and then pulls a trajectory from memory; during the flight it does this over and over again to correct for the errors it made in its guess.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093890</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095934</id>
	<title>Quantum and Nietzsche</title>
	<author>Anonymous</author>
	<datestamp>1265047320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Definitely quantum ways are closer to AI than an abacus and a powerful computer.</p><p>I hear a lot of us-against-them scenarios mentioned here. But I see it differently. It's more like we augment ourselves and become more powerful humans thanks to any developments.</p><p>Nietzsche in 'Thus spoke Zarathustra' is constantly going on about the superman and how the way forward for humans is to overcome ourselves and in so doing bring about our own downfall.</p><p>The downfall of human 1.0 is the morphing to human 1.01, 1.02 etc ad nauseam.</p><p>And in terms of a no-work future...haha, we could have that now, but we compete fiercely because we are alive, and living things always try to win.</p><p>I.e., if I had a twice as efficient factory, I wouldn't fire half my workers; I'd keep the same staffing levels and keep the same 50 hour weeks, I'd make twice as much product and get twice as much revenue. So would my competitors.</p><p>Finally, I see biotech being a promising avenue into human augmentation.</p><p>Despite all this, the best way to create an intelligent autonomous entity is still to have sex until pregnancy results.</p></htmltext>
<tokentext>Definitely quantum ways are closer to AI than an abacus and a powerful computer.I hear a lot of us-against-them scenarios mentioned here .
But I see it differently .
It 's more like we augment ourselves and become more powerful humans thanks to any developments.Nietzsche in 'Thus spoke Zarathustra ' is constantly going on about the superman and how the way forward for humans is to overcome ourselves and in so doing bring about our own downfall.The downfall of human 1.0 is the morphing to human 1.01 , 1.02 etc ad nauseam.And in terms of a no-work future...haha , we could have that now , but we compete fiercely because we are alive , and living things always try to win.I.e. , if I had a twice as efficient factory , I would n't fire half my workers ; I 'd keep the same staffing levels and keep the same 50 hour weeks , I 'd make twice as much product and get twice as much revenue .
So would my competitors.Finally , I see biotech being a promising avenue into human augmentation.Despite all this , the best way to create an intelligent autonomous entity is still to have sex until pregnancy results .</tokentext>
<sentencetext>Definitely quantum ways are closer to AI than an abacus and a powerful computer.I hear a lot of us-against-them scenarios mentioned here.
But I see it differently.
It's more like we augment ourselves and become more powerful humans thanks to any developments.Nietzsche in 'Thus spoke Zarathustra' is constantly going on about the superman and how the way forward for humans is to overcome ourselves and in so doing bring about our own downfall.The downfall of human 1.0 is the morphing to human 1.01, 1.02 etc ad nauseam.And in terms of a no-work future...haha, we could have that now, but we compete fiercely because we are alive, and living things always try to win.I.e., if I had a twice as efficient factory, I wouldn't fire half my workers; I'd keep the same staffing levels and keep the same 50 hour weeks, I'd make twice as much product and get twice as much revenue.
So would my competitors.Finally, I see biotech being a promising avenue into human augmentation.Despite all this, the best way to create an intelligent autonomous entity is still to have sex until pregnancy results.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098588</id>
	<title>Re:The obvious solution</title>
	<author>f3r</author>
	<datestamp>1265898480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Nice idea. Have you heard of the no-cloning theorem in quantum mechanics? Let's hope cognitive functions are purely nonquantum...</htmltext>
<tokentext>Nice idea .
Have you heard of the no-cloning theorem in quantum mechanics ?
Let 's hope cognitive functions are purely nonquantum.. .</tokentext>
<sentencetext>Nice idea.
Have you heard of the no-cloning theorem in quantum mechanics?
Let's hope cognitive functions are purely nonquantum...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095556</id>
	<title>Human-level AI is pointless</title>
	<author>jedwidz</author>
	<datestamp>1265044380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The biggest problem I see with human-level AI is that it's largely pointless.</p><p>We're approaching seven billion people and an epidemic of unemployment.  Human intelligence is in massive oversupply.  So why is anyone going to fork out the necessary billions on R&amp;D over the coming decades to develop something that's going to be worthless?</p><p>I don't use my brain most days, and I have a job.</p></htmltext>
<tokentext>The biggest problem I see with human-level AI is that it 's largely pointless.We 're approaching seven billion people and an epidemic of unemployment .
Human intelligence is in massive oversupply .
So why is anyone going to fork out the necessary billions on R&amp;D over the coming decades to develop something that 's going to be worthless ? I do n't use my brain most days , and I have a job .</tokentext>
<sentencetext>The biggest problem I see with human-level AI is that it's largely pointless.We're approaching seven billion people and an epidemic of unemployment.
Human intelligence is in massive oversupply.
So why is anyone going to fork out the necessary billions on R&amp;D over the coming decades to develop something that's going to be worthless?I don't use my brain most days, and I have a job.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095850</id>
	<title>Define "intelligence"</title>
	<author>shadowbearer</author>
	<datestamp>1265046780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>
&nbsp; I'm coming into this late, so it's probably already been said.  At least I hope so.</p><p>
&nbsp; Until we can define what the term intelligence *means* we won't be able to put a date on when AI achieves it.</p><p>
&nbsp; We can't even pin down a solid definition for our own *species*.  To imagine that we can do so is arrogance in the extreme use of the term.  (reference this post for one example)</p><p>
&nbsp; SB</p><p>
&nbsp;</p></htmltext>
<tokentext>  I 'm coming into this late , so it 's probably already been said .
At least I hope so .
  Until we can define what the term intelligence * means * we wo n't be able to put a date on when AI achieves it .
  We ca n't even pin down a solid definition for our own * species * .
To imagine that we can do so is arrogance in the extreme use of the term .
( reference this post for one example )   SB  </tokentext>
<sentencetext>
  I'm coming into this late, so it's probably already been said.
At least I hope so.
  Until we can define what the term intelligence *means* we won't be able to put a date on when AI achieves it.
  We can't even pin down a solid definition for our own *species*.
To imagine that we can do so is arrogance in the extreme use of the term.
(reference this post for one example)
  SB
 </sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093166</id>
	<title>Depends on which human being</title>
	<author>syousef</author>
	<datestamp>1265030340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>My toaster is smarter than some people I know</p></htmltext>
<tokenext>My toaster is smarter than some people I know</tokentext>
<sentencetext>My toaster is smarter than some people I know</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098058</id>
	<title>AI is no match</title>
	<author>Anonymous</author>
	<datestamp>1265892480000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>AI is no match for human stupidity.</p></htmltext>
<tokenext>AI is no match for human stupidity .</tokenext>
<sentencetext>AI is no match for human stupidity.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097930</id>
	<title>The way the brain works is very simple...</title>
	<author>master\_p</author>
	<datestamp>1265890620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>...and that's why it escapes us.</p><p>All the brain does is pattern matching: the input is matched against stored experiences, and when the best match is found, responses are triggered and sent out to the body.</p><p>The brain does not run a sequence of commands in order to make a computation; it simply matches the input to stored data and creates an output.</p><p>There is plenty of evidence to support the above conclusion:</p><p>1) we need to learn things.</p><p>2) we don't actually know anything; we select the case that best matches our survival. This explains religion and superstition, by the way.</p><p>3) when we see danger (a fire, for example), we have trained our brains to increase our adrenaline, which helps us escape the dangerous situation. Babies don't have this training, so they put their hands onto stoves and things that burn.</p><p>4) we can't do arithmetic like a computer does; we can only add basic numbers, and then we can follow a procedure to do more complicated stuff. That's why we have to keep the computations on paper, because our brain is useless at computing things the way a computer does.</p><p>Now to the problem of AI... we won't ever achieve AI like ours if we believe that a computer can act like a brain; a brain works differently than a computer. A computer executes a series of predefined instructions; the brain does pattern matching. Until we, as humanity, realize this, we are never going to make true AI.</p></htmltext>
<tokenext>...and that 's why it escapes us.All the brain does is pattern matching : the input is matched against stored experiences , and when the best match is found , responses are triggered and sent out to the body.The brain does not run a sequence of commands in order to make a computation ; it simply matches the input to stored data and creates an output.There is plenty of evidence to support the above conclusion : 1 ) we need to learn things.2 ) we do n't actually know anything ; we select the case that better matches our survival .
This explains religion and superstition , by the way.3 ) when we see danger ( a fire , for example ) , we have trained our brains to increase our adrenaline , which helps us escape the dangerous situation .
Babies do n't have this training , so they put their hands onto stoves and things that burn.4 ) we ca n't do arithmetic like a computer does ; we can only add basic numbers , and then we can follow a procedure to do more complicated stuff .
That 's why we have to keep the computations in paper , because our brain is useless in computing things the way a computer does.Now to the problem of AI...we wo n't achieve AI like ours ever , if all we believe that a computer can act like a brain ; a brain works differently than a computer .
A computer executes a series of predefined instructions , the brain does pattern matching .
Until we , as humanity , realize this , we are never going to make truly AI .</tokentext>
<sentencetext>...and that's why it escapes us. All the brain does is pattern matching: the input is matched against stored experiences, and when the best match is found, responses are triggered and sent out to the body. The brain does not run a sequence of commands in order to make a computation; it simply matches the input to stored data and creates an output. There is plenty of evidence to support the above conclusion: 1) we need to learn things. 2) we don't actually know anything; we select the case that best matches our survival.
This explains religion and superstition, by the way. 3) when we see danger (a fire, for example), we have trained our brains to increase our adrenaline, which helps us escape the dangerous situation.
Babies don't have this training, so they put their hands onto stoves and things that burn. 4) we can't do arithmetic like a computer does; we can only add basic numbers, and then we can follow a procedure to do more complicated stuff.
That's why we have to keep the computations on paper, because our brain is useless at computing things the way a computer does. Now to the problem of AI... we won't ever achieve AI like ours if we believe that a computer can act like a brain; a brain works differently than a computer.
A computer executes a series of predefined instructions; the brain does pattern matching.
Until we, as humanity, realize this, we are never going to make true AI.</sentencetext>
</comment>
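The pattern-matching model in the comment above (match the input against stored experiences, then trigger the best match's response) can be sketched as a nearest-neighbor lookup. This is a minimal illustration of the commenter's model only, not a claim about neuroscience; the stimuli and responses below are invented for the example.

```python
# Sketch: "brain as pattern matcher" as a nearest-neighbor lookup.
# Stored experiences map a stimulus vector to a response (invented data).
stored = {
    (1.0, 0.0): "reach",
    (0.0, 1.0): "withdraw",
}

def respond(stimulus):
    # Find the stored experience closest to the input (squared Euclidean
    # distance) and trigger its associated response.
    def dist(experience):
        return sum((a - b) ** 2 for a, b in zip(experience, stimulus))
    best = min(stored, key=dist)
    return stored[best]

assert respond((0.9, 0.2)) == "reach"      # closest to the (1.0, 0.0) memory
assert respond((0.1, 0.8)) == "withdraw"   # closest to the (0.0, 1.0) memory
```

Learning, in this toy picture, is just adding entries to `stored`; no sequence of computation steps is executed at recall time.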
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093566</id>
	<title>Amen!</title>
	<author>Weezul</author>
	<datestamp>1265032620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I've frequently observed that AI researchers exaggerate their successes so grossly as to be <a href="http://hplusmagazine.com/articles/ai/build-optimal-scientist-then-retire" title="hplusmagazine.com">outright lying.</a> [hplusmagazine.com]  A little excess optimism is mild by comparison.</p><p>We actually have many parallel approaches towards producing super-human intelligences:</p><p>(1) education and psychology --  Any professional mathematician will tell you about people of unimaginable cleverness and productivity, but they only rarely tell you about all the extraordinarily clever normal mathematicians that will just never produce anything nearly so remarkable.  Imagine if we raised the percentage of the population with the focus, drive, work ethic, and good habits of, say, Terence Tao.  Just not scaring away the women helps too!</p><p>(2) implants and drugs  --  We'll clearly have the ability to enhance the brain well before possessing the ability to build one, especially given this technology has medical applications.  We know some academics are already using drugs to help them focus or improve memory recall.</p><p>(3) parallelization  --  We currently build the largest supercomputers by running parallel algorithms across numerous smaller systems, but the algorithms used by the human brain are already fairly parallel and adaptable.  So we could develop implants and methodologies for parallelizing human mental functions such as memory or analyzing difficult problems; such technology could be developed by working on brain implants in rodent or primate models.</p></htmltext>
<tokenext>I 've frequently observed that AI researchers exaggerate their successes so grossly as to be outright lying .
[ hplusmagazine.com ] A little excess optimism is mild by comparison.We actually have many parallel approaches towards producing super-human intelligences : ( 1 ) education and psychology -- Any professional mathematician will tell you about people of unimaginable cleverness and productivity , but they only rarely tell you about all the extraordinarily clever normal mathematicians that will just never produce anything nearly so remarkable .
Imagine if we raised the percentage of the population with the focus , drive , work ethic , and good habits of , say , Terence Tao .
Just not scaring away the women helps too !
( 2 ) implants and drugs -- We 'll clearly have the ability to enhance the brain well before possessing the ability to build one , especially given this technology has medical applications .
We know some academics are already using drugs to help them focus or improve memory recall .
( 3 ) parallelization -- We currently build the largest supercomputers by running parallel algorithms across numerous smaller systems , but the algorithms used by the human brain are already fairly parallel and adaptable .
So we could develop implants and methodologies for parallelizing human mental functions such as memory or analyzing difficult problems ; such technology could be developed by working on brain implants in rodent or primate models .</tokentext>
<sentencetext>I've frequently observed that AI researchers exaggerate their successes so grossly as to be outright lying.
[hplusmagazine.com]  A little excess optimism is mild by comparison. We actually have many parallel approaches towards producing super-human intelligences: (1) education and psychology -- Any professional mathematician will tell you about people of unimaginable cleverness and productivity, but they only rarely tell you about all the extraordinarily clever normal mathematicians that will just never produce anything nearly so remarkable.
Imagine if we raised the percentage of the population with the focus, drive, work ethic, and good habits of, say, Terence Tao.
Just not scaring away the women helps too!
(2) implants and drugs -- We'll clearly have the ability to enhance the brain well before possessing the ability to build one, especially given this technology has medical applications.
We know some academics are already using drugs to help them focus or improve memory recall.
(3) parallelization -- We currently build the largest supercomputers by running parallel algorithms across numerous smaller systems, but the algorithms used by the human brain are already fairly parallel and adaptable.
So we could develop implants and methodologies for parallelizing human mental functions such as memory or analyzing difficult problems; such technology could be developed by working on brain implants in rodent or primate models.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095498</id>
	<title>Re:What is AI anyway?</title>
	<author>Anonymous</author>
	<datestamp>1265044020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Given you seem knowledgeable about statistics, it's easily possible that you were "gaming" the system - not necessarily consciously, but subconsciously. Your "bias" was to make the data set appear to be random, by taking the previous results into account (something no random system should ever do). The only fair way to conduct this would be to get a number of non statistics-trained people to do the experiment.</p></htmltext>
<tokenext>Given you seem knowledgeable about statistics , it 's easily possible that you were " gaming " the system - not necessarily consciously , but subconsciously .
Your " bias " was to make the data set appear to be random , by taking the previous results into account ( something no random system should ever do ) .
The only fair way to conduct this would be to get a number of non statistics-trained people to do the experiment .</tokentext>
<sentencetext>Given you seem knowledgeable about statistics, it's easily possible that you were "gaming" the system - not necessarily consciously, but subconsciously.
Your "bias" was to make the data set appear to be random, by taking the previous results into account (something no random system should ever do).
The only fair way to conduct this would be to get a number of non statistics-trained people to do the experiment.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094264</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094174</id>
	<title>Re:The obvious solution</title>
	<author>Anonymatt</author>
	<datestamp>1265035140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Don't you realize that you die on the transporter pad?</p></htmltext>
<tokenext>Do n't you realize that you die on the transporter pad ?</tokentext>
<sentencetext>Don't you realize that you die on the transporter pad?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988</id>
	<title>What is AI anyway?</title>
	<author>Sark666</author>
	<datestamp>1265029500000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>To me the key word is artificial; depending on your interpretation of the meaning it could be simply man-made, or it's fake, simulated.</p><p>Does Deep Blue show any intelligence?  To me, that's just good programming.  I think the intelligence of computers is a misnomer.  Their intelligence so far has always been nil.  Maybe that'll change, but while in so many areas of technology I'm an optimist, in this regard I'm a pessimist, or at least very skeptical.</p><p>A computer can't even pick a (truly) random number without being hooked up to a device feeding it random noise.</p><p>How do you program that? How does the brain choose a random number?  What's holding us back?  CPU speed? Quantum computing? A brilliant programmer?</p><p>Wake me up when a computer can even do something as simple as pick a truly random number and I'll be impressed.</p></htmltext>
<tokenext>To me the key word is artificial , depending on your interpretation of the meaning it could be simply man made , or it 's fake , simulated.Does deep blue show any intelligence ?
To me , that 's just good programming .
I think the intelligence of computers is a misnomer .
Their intelligence so far has always been nil .
Maybe that 'll change , but in so many areas of technology I 'm an optimist but in this regard I 'm a pessimist or at least very skeptical.A computer ca n't even pick a ( truly ) random number without being hooked up to a device feeding it random noise.How do you program that ?
How does the brain choose a random number ?
What 's holding us back ?
CPU Speed ?
Quantum computing ?
A brilliant programmer ? Wake me up when a computer can even do something as simple as pick a truly random number and I 'll be impressed .</tokentext>
<sentencetext>To me the key word is artificial; depending on your interpretation of the meaning it could be simply man-made, or it's fake, simulated. Does Deep Blue show any intelligence?
To me, that's just good programming.
I think the intelligence of computers is a misnomer.
Their intelligence so far has always been nil.
Maybe that'll change, but while in so many areas of technology I'm an optimist, in this regard I'm a pessimist, or at least very skeptical. A computer can't even pick a (truly) random number without being hooked up to a device feeding it random noise. How do you program that?
How does the brain choose a random number?
What's holding us back?
CPU Speed?
Quantum computing?
A brilliant programmer? Wake me up when a computer can even do something as simple as pick a truly random number and I'll be impressed.</sentencetext>
</comment>
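The comment's claim that a computer needs an external noise source to pick a truly random number is roughly what modern operating systems do: they collect hardware and timing noise into an entropy pool. A minimal Python sketch contrasting a deterministic PRNG with the OS entropy pool, using only standard-library calls:

```python
# Sketch: deterministic PRNG vs. OS entropy pool.
import random
import secrets

# A PRNG seeded identically always yields the same "random" sequence,
# which is the commenter's point about computers on their own:
rng_a = random.Random(42)
rng_b = random.Random(42)
assert [rng_a.random() for _ in range(5)] == [rng_b.random() for _ in range(5)]

# secrets draws from the OS entropy pool (hardware noise, interrupt
# timing), i.e. the "device feeding it random noise" the comment describes:
token = secrets.token_hex(16)   # 16 random bytes, hex-encoded
assert len(token) == 32
```

Whether the entropy pool counts as "truly" random is exactly the commenter's question; the code only shows where the nondeterminism has to come from.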
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093124</id>
	<title>donut-shaped energy sources</title>
	<author>jeko</author>
	<datestamp>1265030160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Didn't Tony Stark already solve that one?</htmltext>
<tokenext>Did n't Tony Stark already solve that one ?</tokentext>
<sentencetext>Didn't Tony Stark already solve that one?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092812</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093150</id>
	<title>Re:This touches on a problem I have</title>
	<author>inanet</author>
	<datestamp>1265030340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Weren't these the same sorts of concerns that people had around the time of the industrial revolution?

Also with things such as automated assembly lines?

I think jobs will shift and adapt. With AI assistance, I imagine that the "menial" jobs of tomorrow will probably be much less "menial" than those of today.

Think back to what counted as a menial job 100 years ago versus today.

Certainly there will be periods where people lose jobs to machines, but this has happened in the past; just look at the number of bank employees before ATMs became popular versus after.

But in time we as a race will adapt, and within a generation it will be something to be discussed in a class.

The speed at which we adapt and change will ultimately prove this to be much less of an issue. When the robotic replacement of menial workers becomes viable, I can guarantee they won't be viable economically at first.

It will probably be a slowly growing wave: to begin with it would be menial tasks in really hazardous environments, slowly taking more mainstream positions as the cost of the robotic workers drops.


Put it another way: we already have the technology to build house-painting robots that would easily paint a house perfectly (a spray gun on rails, effectively), yet I see that painters are still very much in demand, particularly as the cost of the machinery outweighs the benefit.


So my thoughts are that in the future we will have simply adapted, making use of the machines much as we do now.

50 years ago, who would have envisioned jobs like SAP consultant or Web Developer?

100 years ago, most people wouldn't have even vaguely comprehended most office-based jobs we do today, let alone those on computers.</htmltext>
<tokenext>Were n't these the same sorts of concerns that people had around the time of the industrial revolution ?
also with things such as automated assembly lines ?
I think jobs will shift and adapt , with AI assistance , I imagine that the " menial " jobs of tomorrow will probably be much less " menial " than those of today , however , If you think back to what was a menial job 100 years ago , versus today ?
certainly there will be periods where people are losing jobs to machines , but this has happened in the past , just look at the number of bank employees before ATM 's became popular vs After .
but in time we as a race will adapt , and within a generation it will be something to be discussed in a class , the speed at which we adapt and change will ultimately prove this to be much less of an issue , as when the robotic replacement of menial workers becomes viable , I can guarantee they wo n't be viable economically at first , probably a slowly growing wave , eg , to begin with it would be menial tasks in really hazardous environments , and slowly taking more mainstream positions as the cost of the robotic workers drops .
put it another way , we already have the technology to build house painting robots , that would easily paint a house perfectly ( a spray gun on rails effectively ) but yet I see that Painters are still very much in demand , particularly as the cost of the machinery out weighs the benefit .
so my thoughts are that in the future we will have simply adapted , making use of the machines much as we do now .
50 years ago , who would have envisioned jobs like SAP consultant or Web Developer , 100 years ago , most people would n't have even vaguely comprehended most office based jobs we do today , let alone those on computers .</tokentext>
<sentencetext>Weren't these the same sorts of concerns that people had around the time of the industrial revolution?
Also with things such as automated assembly lines?
I think jobs will shift and adapt. With AI assistance, I imagine that the "menial" jobs of tomorrow will probably be much less "menial" than those of today.

Think back to what counted as a menial job 100 years ago versus today.
Certainly there will be periods where people lose jobs to machines, but this has happened in the past; just look at the number of bank employees before ATMs became popular versus after.
But in time we as a race will adapt, and within a generation it will be something to be discussed in a class.

The speed at which we adapt and change will ultimately prove this to be much less of an issue. When the robotic replacement of menial workers becomes viable, I can guarantee they won't be viable economically at first.

It will probably be a slowly growing wave: to begin with it would be menial tasks in really hazardous environments, slowly taking more mainstream positions as the cost of the robotic workers drops.
Put it another way: we already have the technology to build house-painting robots that would easily paint a house perfectly (a spray gun on rails, effectively), yet I see that painters are still very much in demand, particularly as the cost of the machinery outweighs the benefit.
So my thoughts are that in the future we will have simply adapted, making use of the machines much as we do now.
50 years ago, who would have envisioned jobs like SAP consultant or Web Developer?

100 years ago, most people wouldn't have even vaguely comprehended most office-based jobs we do today, let alone those on computers.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097166</id>
	<title>Re:The Turing Test</title>
	<author>jonaskoelker</author>
	<datestamp>1265881320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>Sexual attraction, and other emotional desires, are what drive humans beings to make scientific advancements</p></div><p>Yeah, I get laid every time I say "Ph.D. in cryptography"<nobr> <wbr></nobr>:(</p></p>
	</htmltext>
<tokenext>Sexual attraction , and other emotional desires , are what drive humans beings to make scientific advancementsYeah , I get laid every time I say " Ph.D. in cryptography " : (</tokentext>
<sentencetext>Sexual attraction, and other emotional desires, are what drive humans beings to make scientific advancements. Yeah, I get laid every time I say "Ph.D. in cryptography" :(
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093108</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093020</id>
	<title>What about the humor milestone...?</title>
	<author>Anonymous</author>
	<datestamp>1265029740000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>When will AI be able to write original jokes that can make people laugh?  And how about scripting a funny TV commercial?</p></htmltext>
<tokenext>When will AI be able to write original jokes that can make people laugh ?
And how about scripting a funny TV commercial ?</tokentext>
<sentencetext>When will AI be able to write original jokes that can make people laugh?
And how about scripting a funny TV commercial?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093328</id>
	<title>Ah ah !</title>
	<author>ivan\_w</author>
	<datestamp>1265031240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Right now, I'd tend to say a house fly is many orders of magnitude more "intelligent" (whatever that means) than the most powerful of machines - in a package weighing less than a gram that is self-sustaining, flying, with superior evading capabilities.</p><p>We're not even close... not even...</p></htmltext>
<tokenext>Right now , I 'd tend to say a house fly is many orders of magnitude more " inteligent " ( whatever that means ) than the most powerful of machines - in a package weighing less than a gram that is self-sustaining , flying , with superior evading capabilities.We 're not even close.. not even. .</tokentext>
<sentencetext>Right now, I'd tend to say a house fly is many orders of magnitude more "intelligent" (whatever that means) than the most powerful of machines - in a package weighing less than a gram that is self-sustaining, flying, with superior evading capabilities. We're not even close... not even...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096130</id>
	<title>Someday soon...</title>
	<author>enantiomer2000</author>
	<datestamp>1265049000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>A lot of people are criticizing these researchers, but these guys are really cutting edge.  Ben Goertzel's work is really amazing.  If Ben says 20 years, I would listen.</htmltext>
<tokenext>A lot of people are criticizing these researchers but these guys are really cutting edge .
Ben Goertzel 's work is really amazing .
If Ben is saying 20 years I would listen .</tokentext>
<sentencetext>A lot of people are criticizing these researchers but these guys are really cutting edge.
Ben Goertzel's work is really amazing.
If Ben is saying 20 years I would listen.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093158</id>
	<title>Better statistics below</title>
	<author>Anonymous</author>
	<datestamp>1265030340000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>And 100\% of a panel of experts composed of Edsger Dijkstra said that the question of whether a computer can think is no more interesting than the question of whether a submarine can swim.</p></htmltext>
<tokenext>And 100 \ % of a panel of experts composed of Edsger Dijkstra said that the question of whether a computer can think is no more interesting than the question of whether a submarine can swim .</tokentext>
<sentencetext>And 100\% of a panel of experts composed of Edsger Dijkstra said that the question of whether a computer can think is no more interesting than the question of whether a submarine can swim.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093788</id>
	<title>Re:What is AI anyway?</title>
	<author>Anonymous</author>
	<datestamp>1265033520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>Does deep blue show any intelligence? To me, that's just good programming</p></div><p>I don't think anyone would call Deep Blue intelligent, and I don't think it was intended to play chess intelligently, so I'm not sure what your point is. It essentially ran a brute-force search on special hardware for the move that minimized its maximum loss and picked the best move it could find according to that criterion. Search techniques used in AI nowadays (such as genetic algorithms, ant colony optimization, particle swarm optimization, etc.) are much more sophisticated.</p><p><div class="quote"><p>Wake me up when a computer can even do something as simple as pick a truly random number and I'll be impressed.</p></div><p>Most humans, in fact, are not "truly" random either. I remember in my old Statistics book it demonstrated this by asking you to pick a number at the top of the next page at random. It correctly predicted the number I chose, 3. Although some people would choose 1, 2, or 4, ~3/4 of humans would pick 3.</p></p>
	</htmltext>
<tokenext>Does deep blue show any intelligence ?
To me , that 's just good programmingI do n't think anyone would call Deep Blue intelligent , and I do n't think it was intended to play chess intelligently , so I 'm not sure what your point is .
It essentially ran a brute-force search on special hardware for the move that minimized its maximum loss and picked the best move it could find according to that criterion .
Search techniques used in AI nowadays ( such as genetic algorithms , ant colony optimization , particle swarm optimization , etc .
) are much more sophisticated.Wake me up when a computer can even do something as simple as pick a truly random number and I 'll be impressed.Most humans , in fact , are not " truly " random either .
I remember in my old Statistics book it demonstrated this by asking you to pick a number at the top of the next page at random .
It correctly predicted the number I chose , 3 .
Although some people would choose 1 , 2 , or 4 , ~ 3/4 of humans would pick 3 .</tokentext>
<sentencetext>Does deep blue show any intelligence?
To me, that's just good programming. I don't think anyone would call Deep Blue intelligent, and I don't think it was intended to play chess intelligently, so I'm not sure what your point is.
It essentially ran a brute-force search on special hardware for the move that minimized its maximum loss and picked the best move it could find according to that criterion.
Search techniques used in AI nowadays (such as genetic algorithms, ant colony optimization, particle swarm optimization, etc.) are much more sophisticated.
Wake me up when a computer can even do something as simple as pick a truly random number and I'll be impressed. Most humans, in fact, are not "truly" random either.
I remember in my old Statistics book it demonstrated this by asking you to pick a number at the top of the next page at random.
It correctly predicted the number I chose, 3.
Although some people would choose 1, 2, or 4, ~3/4 of humans would pick 3.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988</parent>
</comment>
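The brute-force "minimize its maximum loss" search the reply above attributes to Deep Blue is the minimax algorithm. A minimal sketch over a toy game tree follows; the tree shape and scores are invented for illustration, and a real chess engine adds alpha-beta pruning, a hand-tuned evaluation function, move ordering, and (in Deep Blue's case) custom hardware.

```python
# Minimal minimax sketch over a toy game tree.
# A node is either an int (a terminal position's score, from the
# maximizer's point of view) or a list of child nodes (legal moves).
def minimax(node, maximizing):
    if isinstance(node, int):
        return node  # leaf: just report the position's score
    values = [minimax(child, not maximizing) for child in node]
    # The maximizer picks the best value; the minimizer the worst.
    return max(values) if maximizing else min(values)

# Depth-2 tree: two moves for us, two replies each (invented scores).
tree = [[3, 12], [2, 9]]
# First branch guarantees at least 3; second only 2. Minimax picks 3:
assert minimax(tree, True) == 3
```

The "brute force" in the comment is exactly this exhaustive recursion, just carried to enormous depth and width.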
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099538</id>
	<title>I predict..</title>
	<author>hoelk</author>
	<datestamp>1265903700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I predict that in 20 years computers will have so many cores that we can just brute-force any problem with no need for AI.</htmltext>
<tokenext>I predict in 20 years computers have so many cores that we can just brute-force any problem without need for AI .</tokentext>
<sentencetext>I predict that in 20 years computers will have so many cores that we can just brute-force any problem with no need for AI.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092812</id>
	<title>Such baloney.</title>
	<author>sudog</author>
	<datestamp>1265028660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Ask those guys what consciousness is, and what it means to be conscious. And ask them what our brains' quantum-scale structures' purposes are.</p><p>Not a single one of these guys will give you an answer, because humans don't have the answers yet. Once we can actually define these things, then we can start making these sorts of predictions. "Superhuman" intelligence indeed... we don't even really know what human intelligence is!</p><p>Robots running around doing human tasks, flying cars, donut-shaped energy sources that power cities, and intra-solar space travel were all things people in the 1950s predicted, too, and how close to those are we now, now that we have better-defined the problems involved?</p></htmltext>
<tokenext>Ask those guys what consciousness is , and what it means to be conscious .
And ask them what our brains ' quantum-scale structures ' purposes are .
Not a single one of these guys will give you an answer , because humans do n't have the answers yet .
Once we can actually define these things , then we can start making these sorts of predictions .
" Superhuman " intelligence indeed.. we do n't even really know what human intelligence is !
Robots running around doing human tasks , flying cars , donut-shaped energy sources that power cities , and intra-solar space travel were all things people in the 1950s predicted , too , and how close to those are we now , now that we have better-defined the problems involved ?</tokentext>
<sentencetext>Ask those guys what consciousness is, and what it means to be conscious.
And ask them what our brains' quantum-scale structures' purposes are.
Not a single one of these guys will give you an answer, because humans don't have the answers yet.
Once we can actually define these things, then we can start making these sorts of predictions.
"Superhuman" intelligence indeed... we don't even really know what human intelligence is!
Robots running around doing human tasks, flying cars, donut-shaped energy sources that power cities, and intra-solar space travel were all things people in the 1950s predicted, too, and how close to those are we now, now that we have better-defined the problems involved?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097554</id>
	<title>It won't because humans provide the standard</title>
	<author>Anonymous</author>
	<datestamp>1265886900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>If ever machines make decisions, they will be the decisions which humans have programmed them to make. If by some unforeseen circumstance they make decisions which no human would want them to make, then it will be an error (and that's pretty stupid).</p><p>They might add up numbers and construct cars more efficiently than humans, but if they ever use those skills to do something which humans don't want them to do, that will be pretty dumb.</p><p>One day a computer might make a decision to wipe out humanity, and it might have the capability to do just that, but that doesn't make it intelligent.</p></htmltext>
<tokenext>If ever machines make decisions , they will be the decisions which humans have programmed them to make .
If by some unforeseen circumstance they make decisions which no human would want them to make , then it will be an error ( and that 's pretty stupid ) .
They might add up numbers and construct cars more efficiently than humans , but if they ever use those skills to do something which humans do n't want them to do , that will be pretty dumb .
One day a computer might make a decision to wipe out humanity , and it might have the capability to do just that , but that does n't make it intelligent .</tokentext>
<sentencetext>If ever machines make decisions, they will be the decisions which humans have programmed them to make.
If by some unforeseen circumstance they make decisions which no human would want them to make, then it will be an error (and that's pretty stupid).
They might add up numbers and construct cars more efficiently than humans, but if they ever use those skills to do something which humans don't want them to do, that will be pretty dumb.
One day a computer might make a decision to wipe out humanity, and it might have the capability to do just that, but that doesn't make it intelligent.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099998</id>
	<title>Re:No way.</title>
	<author>Just Some Guy</author>
	<datestamp>1265905800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>They're really assuming that the technology will go from zero to sixty in 20 years. Which they assumed 20 years ago, too, and it didn't happen.</p></div><p>Having read <a href="http://en.wikipedia.org/wiki/The\_Singularity\_Is\_Near" title="wikipedia.org">"The Singularity Is Near"</a> [wikipedia.org], I'd have to say that Kurzweil makes a compelling case. The underlying principle of these predictions is that exponential growth is accelerating (that is, the exponent itself is increasing). He has some <a href="http://singularity.com/charts/page64.html" title="singularity.com">shiny charts</a> [singularity.com] that illustrate his point better than I could. Whether these predictions pan out is another story, but I'd have to agree that they're not nearly as unbelievable as they sound when you first hear them.</p>
	</htmltext>
<tokenext>They 're really assuming that the technology will go from zero to sixty in 20 years .
Which they assumed 20 years ago , too , and it did n't happen .
Having read " The Singularity Is Near " [ wikipedia.org ] , I 'd have to say that Kurzweil makes a compelling case .
The underlying principle of these predictions is that exponential growth is accelerating ( that is , the exponent itself is increasing ) .
He has some shiny charts [ singularity.com ] that illustrate his point better than I could .
Whether these predictions pan out is another story , but I 'd have to agree that they 're not nearly as unbelievable as they sound when you first hear them .</tokentext>
<sentencetext>They're really assuming that the technology will go from zero to sixty in 20 years.
Which they assumed 20 years ago, too, and it didn't happen.
Having read "The Singularity Is Near" [wikipedia.org], I'd have to say that Kurzweil makes a compelling case.
The underlying principle of these predictions is that exponential growth is accelerating (that is, the exponent itself is increasing).
He has some shiny charts [singularity.com] that illustrate his point better than I could.
Whether these predictions pan out is another story, but I'd have to agree that they're not nearly as unbelievable as they sound when you first hear them.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
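The "exponent itself is increasing" claim above can be sketched numerically. The following is a minimal, hypothetical Python sketch (the `grow` function and its rates are illustrative assumptions, not Kurzweil's actual model): ordinary exponential growth multiplies by a fixed rate each step, while the "accelerating" variant bumps the rate itself every step, and the two diverge enormously over the same horizon.

```python
# Plain exponential growth vs. growth whose rate itself increases each step.
# The numbers are hypothetical, for illustration only -- not data from the charts.
def grow(steps, rate, acceleration=0.0):
    value, r = 1.0, rate
    history = []
    for _ in range(steps):
        value *= r            # multiply by the current growth rate
        r += acceleration     # acceleration=0.0 gives ordinary exponential growth
        history.append(value)
    return history

plain = grow(30, 1.5)         # fixed 50% growth per step
accel = grow(30, 1.5, 0.05)   # the growth rate itself rises each step

# Over 30 steps the accelerating curve ends up vastly ahead of the plain one.
print(accel[-1] / plain[-1])
```

Whether real price-performance curves actually follow the accelerating variant is exactly what is in dispute in this thread.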
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093876</id>
	<title>Re:Turing, not long. The rest... wait a long time.</title>
	<author>Anonymous</author>
	<datestamp>1265033820000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>I think it is pretty widely recognized now that while it might have seemed logical in Turing's time, convincing emulation of a human being in a conversation (especially if done via terminal) does not require anything like human intelligence. Heck, even simple programs like Eliza had some humans fooled decades ago.</p></div><p>You've seriously misinterpreted Turing's point.  Eliza doesn't fool anyone who's actively trying to determine whether she's human or not.  Nor do any other current attempts at beating the Turing test.  Current competitions, between programs trying to beat it, have judges who are intentionally being lenient.</p><p>Being able to fool someone in a passing chat doesn't mean that a program has passed the Turing test.  It needs to be able to pass every conceivable test involving communication with the program - which, really, means anything that doesn't actually involve examining its internal workings.  That's still a long way off.</p>
	</htmltext>
<tokenext>I think it is pretty widely recognized now that while it might have seemed logical in Turing 's time , convincing emulation of a human being in a conversation ( especially if done via terminal ) does not require anything like human intelligence .
Heck , even simple programs like Eliza had some humans fooled decades ago .
You 've seriously misinterpreted Turing 's point .
Eliza does n't fool anyone who 's actively trying to determine whether she 's human or not .
Nor do any other current attempts at beating the Turing test .
Current competitions , between programs trying to beat it , have judges who are intentionally being lenient .
Being able to fool someone in a passing chat does n't mean that a program has passed the Turing test .
It needs to be able to pass every conceivable test involving communication with the program - which , really , means anything that does n't actually involve examining its internal workings .
That 's still a long way off .</tokentext>
<sentencetext>I think it is pretty widely recognized now that while it might have seemed logical in Turing's time, convincing emulation of a human being in a conversation (especially if done via terminal) does not require anything like human intelligence.
Heck, even simple programs like Eliza had some humans fooled decades ago.
You've seriously misinterpreted Turing's point.
Eliza doesn't fool anyone who's actively trying to determine whether she's human or not.
Nor do any other current attempts at beating the Turing test.
Current competitions, between programs trying to beat it, have judges who are intentionally being lenient.
Being able to fool someone in a passing chat doesn't mean that a program has passed the Turing test.
It needs to be able to pass every conceivable test involving communication with the program - which, really, means anything that doesn't actually involve examining its internal workings.
That's still a long way off.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093082</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099110</id>
	<title>Re:Turing, not long. The rest... wait a long time.</title>
	<author>Anonymous</author>
	<datestamp>1265901240000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The fundamental problem with the Turing test is that it's essentially subjective. The equation is dominated by how subtle the human observer happens to be. It's therefore (and always has been) nice armchair philosophy but is totally flawed as an investigative tool. And that's only one of the problems of AI.</p></htmltext>
<tokenext>The fundamental problem with the Turing test is that it 's essentially subjective .
The equation is dominated by how subtle the human observer happens to be .
It 's therefore ( and always has been ) nice armchair philosophy but is totally flawed as an investigative tool .
And that 's only one of the problems of AI .</tokentext>
<sentencetext>The fundamental problem with the Turing test is that it's essentially subjective.
The equation is dominated by how subtle the human observer happens to be.
It's therefore (and always has been) nice armchair philosophy but is totally flawed as an investigative tool.
And that's only one of the problems of AI.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093082</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093540</id>
	<title>What kind of jobs will there be?</title>
	<author>Anonymous</author>
	<datestamp>1265032440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>If most of the decent paying jobs will be eliminated by AI, what are the best ones remaining?</p><p>Robots will do all the restocking of the shelves and the cashiers' jobs in stores; there will probably be McRobots instead of McDonalds. I was going to say robot repair, but that can be done by other robots. Repair of repair robots? Maybe psychiatrists, or customer service - something some super wealthy CEO would want to talk to an actual human for.</p><p>I've read some economists are predicting worldwide economic collapse when people are not able to trade their labor for food and housing, resulting in 50\%-80\%+ unemployment.</p></htmltext>
<tokenext>If most of the decent paying jobs will be eliminated by AI , what are the best ones remaining ?
Robots will do all the restocking of the shelves and the cashiers ' jobs in stores ; there will probably be McRobots instead of McDonalds .
I was going to say robot repair , but that can be done by other robots .
Repair of repair robots ?
Maybe psychiatrists , or customer service - something some super wealthy CEO would want to talk to an actual human for .
I 've read some economists are predicting worldwide economic collapse when people are not able to trade their labor for food and housing , resulting in 50 \ % -80 \ % + unemployment .</tokentext>
<sentencetext>If most of the decent paying jobs will be eliminated by AI, what are the best ones remaining?
Robots will do all the restocking of the shelves and the cashiers' jobs in stores; there will probably be McRobots instead of McDonalds.
I was going to say robot repair, but that can be done by other robots.
Repair of repair robots?
Maybe psychiatrists, or customer service - something some super wealthy CEO would want to talk to an actual human for.
I've read some economists are predicting worldwide economic collapse when people are not able to trade their labor for food and housing, resulting in 50\%-80\%+ unemployment.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092896</id>
	<title>Life after AI</title>
	<author>TwiztidK</author>
	<datestamp>1265029080000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext>When the computers are doing all of the intellectual work what will people do? I doubt that factory jobs would be prevalent as the employees would be replaced by robots. Will we simply laze about all day posting on Slashdot? Or, will our robot overlords kill all of us? It seems like the easy solution would be not to develop advanced AI; it's not going to develop itself...yet.</htmltext>
<tokenext>When the computers are doing all of the intellectual work what will people do ?
I doubt that factory jobs would be prevalent as the employees would be replaced by robots .
Will we simply laze about all day posting on Slashdot ?
Or , will our robot overlords kill all of us ?
It seems like the easy solution would be not to develop advanced AI ; it 's not going to develop itself...yet .</tokentext>
<sentencetext>When the computers are doing all of the intellectual work what will people do?
I doubt that factory jobs would be prevalent as the employees would be replaced by robots.
Will we simply laze about all day posting on Slashdot?
Or, will our robot overlords kill all of us?
It seems like the easy solution would be not to develop advanced AI; it's not going to develop itself...yet.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092962</id>
	<title>That sound? Inevitability Mr. Anderson.</title>
	<author>headkase</author>
	<datestamp>1265029440000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext>Doubters and those who don't truly understand deny Strong AI will happen.  Let them live in their bubble for another *short* while.  The numbers, when you're talking 100GHz processors on the horizon, are starting to get there.  Of course there are going to be critical impacts on all aspects of human society.  To minimize it perhaps we should move to the system that requires an amazing level of technology to function.  Yes, the big, bad, bugaboo of communism.  Just because people tried to make it work without having the necessary pieces doesn't mean it's old and busted.  Well, to people who aren't stupid anyway.  The inefficiency issues are greatly mitigated by using computerization technology to simply track everything and eliminate duplication of effort.  From there, with the further piece of AI, well: you program the machines so that they *like* to do the work and allow humans to *just live* without this quaint rat-race to go to every day *because it is no longer needed*.  Of course there will be wars and heartbreak along the way because of course people are dumb in holding on to their resistance to change itself, even if for the better.  And of course we won't see it cleanly because most people will also insist on conflating the issues of means with values; communism as a governing system for production and consumption doesn't *really* have anything to do with rights such as speech.  The ghost of McCarthy will sink that discussion anyway too.</htmltext>
<tokenext>Doubters and those who do n't truly understand deny Strong AI will happen .
Let them live in their bubble for another * short * while .
The numbers , when you 're talking 100GHz processors on the horizon , are starting to get there .
Of course there are going to be critical impacts on all aspects of human society .
To minimize it perhaps we should move to the system that requires an amazing level of technology to function .
Yes , the big , bad , bugaboo of communism .
Just because people tried to make it work without having the necessary pieces does n't mean it 's old and busted .
Well , to people who are n't stupid anyway .
The inefficiency issues are greatly mitigated by using computerization technology to simply track everything and eliminate duplication of effort .
From there , with the further piece of AI , well : you program the machines so that they * like * to do the work and allow humans to * just live * without this quaint rat-race to go to every day * because it is no longer needed * .
Of course there will be wars and heartbreak along the way because of course people are dumb in holding on to their resistance to change itself , even if for the better .
And of course we wo n't see it cleanly because most people will also insist on conflating the issues of means with values ; communism as a governing system for production and consumption does n't * really * have anything to do with rights such as speech .
The ghost of McCarthy will sink that discussion anyway too .</tokentext>
<sentencetext>Doubters and those who don't truly understand deny Strong AI will happen.
Let them live in their bubble for another *short* while.
The numbers, when you're talking 100GHz processors on the horizon, are starting to get there.
Of course there are going to be critical impacts on all aspects of human society.
To minimize it perhaps we should move to the system that requires an amazing level of technology to function.
Yes, the big, bad, bugaboo of communism.
Just because people tried to make it work without having the necessary pieces doesn't mean it's old and busted.
Well, to people who aren't stupid anyway.
The inefficiency issues are greatly mitigated by using computerization technology to simply track everything and eliminate duplication of effort.
From there, with the further piece of AI, well: you program the machines so that they *like* to do the work and allow humans to *just live* without this quaint rat-race to go to every day *because it is no longer needed*.
Of course there will be wars and heartbreak along the way because of course people are dumb in holding on to their resistance to change itself, even if for the better.
And of course we won't see it cleanly because most people will also insist on conflating the issues of means with values; communism as a governing system for production and consumption doesn't *really* have anything to do with rights such as speech.
The ghost of McCarthy will sink that discussion anyway too.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099618</id>
	<title>Re:The Turing Test</title>
	<author>domatic</author>
	<datestamp>1265904060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The books are long winded and preachy but Frank Herbert covered this pretty well in Destination Void.  In that book, a ship full of clones is being sent to colonize another solar system and the exceedingly complex colony ship...which was deliberately designed that way...is run by an Organic Mental Core and a small crew of humans.  The OMC is a re-engineered human brain that has been deliberately bred, grown, and trained for this task.   The colony ship is like the OMC's body and routine aspects of maintaining the ship are handled by parts of the brain that normally control autonomous functions like breathing while the OMC can consciously do more complex repairs and maintenance with "robox units".   The work of maintaining the ship is constant and would be extraordinarily taxing if the ship has to be manually run by the human crew.  Unbeknownst to most of the human crew, the OMCs have been set up to fail.  The OMC they start out with and the backup OMCs they are carrying all go mad and either die or have to be killed to stop them from killing everybody else.</p><p>At this point the crew has to manually maintain the ship.  They are constantly having to menially balance temperatures, fluid flows, and small repairs that if neglected will quickly escalate into destructive problems.  And the ship has been deliberately designed such that it cannot be kept running very long that way.  And just sticking one of their brains or a colonist's brain into the OMC slot is unthinkable and wouldn't work anyway.  You can't put just any old brain there.  So they have to implement an AI to run the ship or die.  They were put in that position because AI research on Earth was succeeding too well in some respects and the AIs being built down there either had to be put down or did extraordinarily destructive things.</p><p>In the process, the objections in the parent post were covered.  
The only model of consciousness they had was a human's so anything they built would have to be based on a human model and that meant human instincts, motivations, drives, and yes even an infancy of sorts.  They couldn't leave aspects of life like a sex drive out to "optimize" things because they only had very broad ideas of what makes for consciousness so it was either all or none.</p></htmltext>
<tokenext>The books are long winded and preachy but Frank Herbert covered this pretty well in Destination Void .
In that book , a ship full of clones is being sent to colonize another solar system and the exceedingly complex colony ship...which was deliberately designed that way...is run by an Organic Mental Core and a small crew of humans .
The OMC is a re-engineered human brain that has been deliberately bred , grown , and trained for this task .
The colony ship is like the OMC 's body and routine aspects of maintaining the ship are handled by parts of the brain that normally control autonomous functions like breathing while the OMC can consciously do more complex repairs and maintenance with " robox units " .
The work of maintaining the ship is constant and would be extraordinarily taxing if the ship has to be manually run by the human crew .
Unbeknownst to most of the human crew , the OMCs have been set up to fail .
The OMC they start out with and the backup OMCs they are carrying all go mad and either die or have to be killed to stop them from killing everybody else .
At this point the crew has to manually maintain the ship .
They are constantly having to menially balance temperatures , fluid flows , and small repairs that if neglected will quickly escalate into destructive problems .
And the ship has been deliberately designed such that it can not be kept running very long that way .
And just sticking one of their brains or a colonist 's brain into the OMC slot is unthinkable and would n't work anyway .
You ca n't put just any old brain there .
So they have to implement an AI to run the ship or die .
They were put in that position because AI research on Earth was succeeding too well in some respects and the AIs being built down there either had to be put down or did extraordinarily destructive things .
In the process , the objections in the parent post were covered .
The only model of consciousness they had was a human 's so anything they built would have to be based on a human model and that meant human instincts , motivations , drives , and yes even an infancy of sorts .
They could n't leave aspects of life like a sex drive out to " optimize " things because they only had very broad ideas of what makes for consciousness so it was either all or none .</tokentext>
<sentencetext>The books are long winded and preachy but Frank Herbert covered this pretty well in Destination Void.
In that book, a ship full of clones is being sent to colonize another solar system and the exceedingly complex colony ship...which was deliberately designed that way...is run by an Organic Mental Core and a small crew of humans.
The OMC is a re-engineered human brain that has been deliberately bred, grown, and trained for this task.
The colony ship is like the OMC's body and routine aspects of maintaining the ship are handled by parts of the brain that normally control autonomous functions like breathing while the OMC can consciously do more complex repairs and maintenance with "robox units".
The work of maintaining the ship is constant and would be extraordinarily taxing if the ship has to be manually run by the human crew.
Unbeknownst to most of the human crew, the OMCs have been set up to fail.
The OMC they start out with and the backup OMCs they are carrying all go mad and either die or have to be killed to stop them from killing everybody else.
At this point the crew has to manually maintain the ship.
They are constantly having to menially balance temperatures, fluid flows, and small repairs that if neglected will quickly escalate into destructive problems.
And the ship has been deliberately designed such that it cannot be kept running very long that way.
And just sticking one of their brains or a colonist's brain into the OMC slot is unthinkable and wouldn't work anyway.
You can't put just any old brain there.
So they have to implement an AI to run the ship or die.
They were put in that position because AI research on Earth was succeeding too well in some respects and the AIs being built down there either had to be put down or did extraordinarily destructive things.
In the process, the objections in the parent post were covered.
The only model of consciousness they had was a human's so anything they built would have to be based on a human model and that meant human instincts, motivations, drives, and yes even an infancy of sorts.
They couldn't leave aspects of life like a sex drive out to "optimize" things because they only had very broad ideas of what makes for consciousness so it was either all or none.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093108</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096420</id>
	<title>Not!</title>
	<author>cyberzephyr</author>
	<datestamp>1265052180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Not! :-(  Don't do it again.</p></htmltext>
<tokenext>Not !
: - ( Do n't do it again .</tokentext>
<sentencetext>Not!
:-(  Don't do it again.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093058</id>
	<title>Research.</title>
	<author>FlyingBishop</author>
	<datestamp>1265029860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Should we work on formal neural networks, probability theory, uncertain logic, evolutionary learning, a large hand-coded knowledge-base, mathematical theory, nonlinear dynamical systems, or an integrative design combining multiple paradigms?</p></div></blockquote><p>People really don't understand research and its place in the world. If we knew what fields could yield AI, it would simply be engineering. Research is required. That means all of the above and the craziest ideas that pop into our heads too, just for good measure.</p>
	</htmltext>
<tokenext>Should we work on formal neural networks , probability theory , uncertain logic , evolutionary learning , a large hand-coded knowledge-base , mathematical theory , nonlinear dynamical systems , or an integrative design combining multiple paradigms ?
People really do n't understand research and its place in the world .
If we knew what fields could yield AI , it would simply be engineering .
Research is required .
That means all of the above and the craziest ideas that pop into our heads too , just for good measure .</tokentext>
<sentencetext>Should we work on formal neural networks, probability theory, uncertain logic, evolutionary learning, a large hand-coded knowledge-base, mathematical theory, nonlinear dynamical systems, or an integrative design combining multiple paradigms?
People really don't understand research and its place in the world.
If we knew what fields could yield AI, it would simply be engineering.
Research is required.
That means all of the above and the craziest ideas that pop into our heads too, just for good measure.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097140</id>
	<title>Re:What is AI anyway?</title>
	<author>zwei2stein</author>
	<datestamp>1265880840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>A computer can't even pick a (truly) random number without being hooked up to a device feeding it random noise.</i></p><p>Neither can you or any other human being, ever.</p></htmltext>
<tokenext>A computer ca n't even pick a ( truly ) random number without being hooked up to a device feeding it random noise .
Neither can you or any other human being , ever .</tokentext>
<sentencetext>A computer can't even pick a (truly) random number without being hooked up to a device feeding it random noise.
Neither can you or any other human being, ever.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988</parent>
</comment>
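The randomness point traded in this exchange can be illustrated with Python's standard library (a small sketch, not part of the thread): `random.Random` is a deterministic pseudorandom generator, so the same seed always reproduces the same "random" picks, while `os.urandom` is exactly the "device feeding it random noise" the parent mentions, reading the operating system's entropy pool.

```python
import os
import random

# Pseudorandom: the whole sequence is fully determined by the seed.
rng_a = random.Random(42)
rng_b = random.Random(42)
seq_a = [rng_a.randint(1, 4) for _ in range(10)]
seq_b = [rng_b.randint(1, 4) for _ in range(10)]
assert seq_a == seq_b  # same seed, same picks -- deterministic, not truly random

# "Hooked up to a device feeding it random noise": os.urandom reads the OS
# entropy pool (hardware noise, interrupt timings, etc.), which no seed reproduces.
entropy = os.urandom(16)
print(len(entropy))  # 16 bytes of entropy
```

Humans fare no better at the task, as the reply notes; the earlier anecdote about picks of 1 through 4 clustering on 3 makes the same point.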
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31108988</id>
	<title>Re:The obvious solution</title>
	<author>Anonymous</author>
	<datestamp>1265903880000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>because creating a brain is oh, so easy :-)</p><p>20 years?  I think they missed about 20 zeroes...</p></htmltext>
<tokenext>because creating a brain is oh , so easy : - )
20 years ?
I think they missed about 20 zeroes.. .</tokentext>
<sentencetext>because creating a brain is oh, so easy :-)
20 years?
I think they missed about 20 zeroes...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31102668</id>
	<title>Opinions are like brains (not assholes after all)</title>
	<author>ooooli</author>
	<datestamp>1265917740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><nobr> <wbr></nobr>...everybody's got one, but sometimes you've got to wonder why<nobr> <wbr></nobr>:p
</p><p>
Seriously, where are all these strong points of view coming from? We haven't even decided what the goal is. What precisely does it mean to be intelligent? Sentient? Conscious? As long as it's ok to keep moving the goal posts, yes, it'll always be 20 years. But how come everyone falls so decisively into the "no way" and "ya-way" camps? At this point, the only valid "expert" opinion is "How the hell should I know?". Of course if you want to be considered an expert, you can't say that. Especially if your arch-nemesis expert is not willing to admit that he doesn't have a clue either...
</p><p>
Having said that, there has been huge progress in the last decades, both in understanding how the brain works and in real-world AI applications. Part of the problem is that, as soon as something begins to be useful, it ceases to be considered AI... Remember when your computer couldn't find the best route from your home to Disneyland in a split second? Remember when netflix couldn't predict (or try to predict) what movies you'd like? Remember when airplanes had to be aerodynamically stable because someone would actually have to fly them? Remember when you couldn't take a class on partial differential equations and have your computer do most of your homework? The algorithms used for those things were all considered AI once, folks...
</p></htmltext>
<tokenext>...everybody 's got one , but sometimes you 've got to wonder why : p Seriously , where are all these strong points of view coming from ?
We have n't even decided what the goal is .
What precisely does it mean to be intelligent ?
Sentient ? Conscious ?
As long as it 's ok to keep moving the goal posts , yes , it 'll always be 20 years .
But how come everyone falls so decisively into the " no way " and " ya-way " camps ?
At this point , the only valid " expert " opinion is " How the hell should I know ? " .
Of course if you want to be considered an expert , you ca n't say that .
Especially if your arch-nemesis expert is not willing to admit that he does n't have a clue either.. . Having said that , there has been huge progress in the last decades , both in understanding how the brain works and in real-world AI applications .
Part of the problem is that , as soon as something begins to be useful , it ceases to be considered AI... Remember when your computer could n't find the best route from your home to Disneyland in a split second ?
Remember when netflix could n't predict ( or try to predict ) what movies you 'd like ?
Remember when airplanes had to be aerodynamically stable because someone would actually have to fly them ?
Remember when you could n't take a class on partial differential equations and have your computer do most of your homework ?
The algorithms used for those things were all considered AI once , folks.. .</tokenext>
<sentencetext> ...everybody's got one, but sometimes you've got to wonder why :p

Seriously, where are all these strong points of view coming from?
We haven't even decided what the goal is.
What precisely does it mean to be intelligent?
Sentient? Conscious?
As long as it's ok to keep moving the goal posts, yes, it'll always be 20 years.
But how come everyone falls so decisively into the "no way" and "ya-way" camps?
At this point, the only valid "expert" opinion is "How the hell should I know?".
Of course if you want to be considered an expert, you can't say that.
Especially if your arch-nemesis expert is not willing to admit that he doesn't have a clue either...

Having said that, there has been huge progress in the last decades, both in understanding how the brain works and in real-world AI applications.
Part of the problem is that, as soon as something begins to be useful, it ceases to be considered AI... Remember when your computer couldn't find the best route from your home to Disneyland in a split second?
Remember when netflix couldn't predict (or try to predict) what movies you'd like?
Remember when airplanes had to be aerodynamically stable because someone would actually have to fly them?
Remember when you couldn't take a class on partial differential equations and have your computer do most of your homework?
The algorithms used for those things were all considered AI once, folks...
</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096406</id>
	<title>I Hope</title>
	<author>Anonymous</author>
	<datestamp>1265052000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Never!</p><p>Even with a TB drive i don't think that.</p></htmltext>
<tokenext>Never ! Even with a TB drive i do n't think that .</tokenext>
<sentencetext>Never! Even with a TB drive i don't think that.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094182</id>
	<title>when hal 9000 comes out</title>
	<author>Joe The Dragon</author>
	<datestamp>1265035200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>when hal 9000 comes out</p></htmltext>
<tokenext>when hal 9000 comes out</tokenext>
<sentencetext>when hal 9000 comes out</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094156</id>
	<title>Re:No way.</title>
	<author>drinkypoo</author>
	<datestamp>1265035080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Barring RFID tagging everything in your room, or a breakthrough in topology computation, it might <em>take</em> a general AI to clean a room. It probably would mine...</p></htmltext>
<tokenext>Barring RFID tagging everything in your room , or a breakthrough in topology computation , it might take a general AI to clean a room .
It probably would mine.. .</tokenext>
<sentencetext>Barring RFID tagging everything in your room, or a breakthrough in topology computation, it might take a general AI to clean a room.
It probably would mine...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094542</id>
	<title>Re:What is AI anyway?</title>
	<author>Anonymous</author>
	<datestamp>1265037000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Who says OUR brains can choose random numbers?  The mind is an incredibly complex network of computational nodes constantly being bombarded with "noise" from a boggling number of sensory elements.  The chemical reactions that power it are largely predictable.  And "truly random" numbers produced by people tend to display biases and strong patterns rather than random behavior.  So it sounds to me like a computer's PRNG is the more effective solution while remaining considerably less complex.</p><p>Random numbers have little to do with intelligence.</p></htmltext>
<tokenext>Who says OUR brains can choose random numbers ?
The mind is an incredibly complex network of computational nodes constantly being bombarded with " noise " from a boggling number of sensory elements .
The chemical reactions that power it are largely predictable .
And " truly random " numbers produced by people tend to display biases and strong patterns rather than random behavior .
So it sounds to me like a computer 's PRNG is the more effective solution while remaining considerably less complex . Random numbers have little to do with intelligence .</tokenext>
<sentencetext>Who says OUR brains can choose random numbers?
The mind is an incredibly complex network of computational nodes constantly being bombarded with "noise" from a boggling number of sensory elements.
The chemical reactions that power it are largely predictable.
And "truly random" numbers produced by people tend to display biases and strong patterns rather than random behavior.
So it sounds to me like a computer's PRNG is the more effective solution while remaining considerably less complex. Random numbers have little to do with intelligence.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092724</id>
	<title>When?</title>
	<author>Anonymous</author>
	<datestamp>1265028300000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>Never.</htmltext>
<tokenext>Never .</tokenext>
<sentencetext>Never.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093132</id>
	<title>Re:Such balogna.</title>
	<author>MichaelSmith</author>
	<datestamp>1265030280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I don't believe consciousness exists (at least as anything unique to intelligence) and I don't believe we are as smart as we think we are.</p></htmltext>
<tokenext>I do n't believe consciousness exists ( at least as anything unique to intelligence ) and I do n't believe we are as smart as we think we are .</tokenext>
<sentencetext>I don't believe consciousness exists (at least as anything unique to intelligence) and I don't believe we are as smart as we think we are.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092812</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097044</id>
	<title>Who pays their salaries?</title>
	<author>Anonymous</author>
	<datestamp>1265879700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Imagine an AI Expert who states: "It will take years and years, maybe never, for AI to supersede humanity?" How much funding will he receive? I guess these guys had to choose a timeline that is close enough to secure their funding and far enough off to not threaten their retirement ;-) Damn, I try to not be so negative.....</p></htmltext>
<tokenext>Imagine an AI Expert who states : " It will take years and years , maybe never , for AI to supersede humanity ?
" How much funding will he receive ?
I guess these guys had to choose a timeline that is close enough to secure their funding and far enough off to not threaten their retirement ; - ) Damn , I try to not be so negative.... .</tokenext>
<sentencetext>Imagine an AI Expert who states: "It will take years and years, maybe never, for AI to supersede humanity?
" How much funding will he receive?
I guess these guys had to choose a timeline that is close enough to secure their funding and far enough off to not threaten their retirement ;-) Damn, I try to not be so negative.....</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093818</id>
	<title>Re:Space shows</title>
	<author>Anonymous</author>
	<datestamp>1265033580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Perfect AI is boring when you're trying to tell a story. And boring stories are bad when you're trying to entertain viewers, or sell ad space.</p><p>Maybe some of us will be put out of work sooner than others following the AI revolution :P</p></htmltext>
<tokenext>Perfect AI is boring when you 're trying to tell a story .
And boring stories are bad when you 're trying to entertain viewers , or sell ad space . Maybe some of us will be put out of work sooner than others following the AI revolution : P</tokenext>
<sentencetext>Perfect AI is boring when you're trying to tell a story.
And boring stories are bad when you're trying to entertain viewers, or sell ad space. Maybe some of us will be put out of work sooner than others following the AI revolution :P</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092918</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093272</id>
	<title>Obligatory:</title>
	<author>bmo</author>
	<datestamp>1265030940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>AI is bogus.</p></htmltext>
<tokenext>AI is bogus .</tokenext>
<sentencetext>AI is bogus.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093466</id>
	<title>How do you quantify "human intelligence"?</title>
	<author>kurt555gs</author>
	<datestamp>1265032140000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>If you were to use the people in any Walmart as your study group, the machines would have been winning since the Sinclair ZX 80.</p></htmltext>
<tokenext>If you were to use the people in any Walmart as your study group , the machines would have been winning since the Sinclair ZX 80 .</tokenext>
<sentencetext>If you were to use the people in any Walmart as your study group, the machines would have been winning since the Sinclair ZX 80.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094048</id>
	<title>Re:We make mistakes. We make games.</title>
	<author>Anonymous</author>
	<datestamp>1265034600000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Our role in a utopian future will be ours to define, finally we will be able to pursue our happiness to the greatest extent.</p></htmltext>
<tokenext>Our role in a utopian future will be ours to define , finally we will be able to pursue our happiness to the greatest extent .</tokenext>
<sentencetext>Our role in a utopian future will be ours to define, finally we will be able to pursue our happiness to the greatest extent.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093216</id>
	<title>I bet a computer could predict the day...</title>
	<author>Jackie_Chan_Fan</author>
	<datestamp>1265030640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Shh... They're thinking and they can see you jerking off.</p></htmltext>
<tokenext>Shh... They 're thinking and they can see you jerking off .</tokenext>
<sentencetext>Shh... They're thinking and they can see you jerking off.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094470</id>
	<title>Biased sample</title>
	<author>Anonymous</author>
	<datestamp>1265036520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>This poll among so-called "AI experts" is incredibly biased: it was conducted among participants of the Artificial General Intelligence conference, which precisely attracts people who believe such a thing is possible within a reasonably short time.</p><p>If you asked the same question at the Neural Information Processing conference, or the International Conference on Machine Learning, or the AAAI conference, you would get a very different answer.</p></htmltext>
<tokenext>This poll among so-called " AI experts " is incredibly biased : it was conducted among participants of the Artificial General Intelligence conference , which precisely attracts people who believe such a thing is possible within a reasonably short time . If you asked the same question at the Neural Information Processing conference , or the International Conference on Machine Learning , or the AAAI conference , you would get a very different answer .</tokenext>
<sentencetext>This poll among so-called "AI experts" is incredibly biased: it was conducted among participants of the Artificial General Intelligence conference, which precisely attracts people who believe such a thing is possible within a reasonably short time. If you asked the same question at the Neural Information Processing conference, or the International Conference on Machine Learning, or the AAAI conference, you would get a very different answer.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097676</id>
	<title>Can someone explain the Turing test to me?</title>
	<author>Phase Shifter</author>
	<datestamp>1265887920000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext>I'm trying to fathom how the ability to blend in to a group of hairless monkeys spamming "ASL?" on the internet is supposed to be construed as a valid measure of intelligence.</htmltext>
<tokenext>I 'm trying to fathom how the ability to blend in to a group of hairless monkeys spamming " ASL ?
" on the internet is supposed to be construed as a valid measure of intelligence .</tokenext>
<sentencetext>I'm trying to fathom how the ability to blend in to a group of hairless monkeys spamming "ASL?
" on the internet is supposed to be construed as a valid measure of intelligence.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096494</id>
	<title>Re:Start laughing now</title>
	<author>Anonymous</author>
	<datestamp>1265052960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Considering that the only source of intelligent behavior (that most would agree upon) is the human brain, I think it's an ok thing to philosophize about and study.</p><p>I don't know what "AI experts" you're meeting with, but as an AI researcher myself, the field seems to be the far opposite extreme of what you describe; most experts are becoming nigh-myopic in their obsession with the math/programming/engineering behind the problems. When you meet the right people in disparate sections of science, you see that there is a lot of wheel reinvention and parallel research going on which would benefit enormously from cross-pollination.</p><p>Frankly, the field is a very very broad one, and it will take all kinds to advance it. Too much abstract, hand-wavey discussion gets us nowhere, but neither does obsessing over minutiae.</p></htmltext>
<tokenext>Considering that the only source of intelligent behavior ( that most would agree upon ) is the human brain , I think it 's an ok thing to philosophize about and study . I do n't know what " AI experts " you 're meeting with , but as an AI researcher myself , the field seems to be the far opposite extreme of what you describe ; most experts are becoming nigh-myopic in their obsession with the math/programming/engineering behind the problems .
When you meet the right people in disparate sections of science , you see that there is a lot of wheel reinvention and parallel research going on which would benefit enormously from cross-pollination . Frankly , the field is a very very broad one , and it will take all kinds to advance it .
Too much abstract , hand-wavey discussion gets us nowhere , but neither does obsessing over minutiae .</tokenext>
<sentencetext>Considering that the only source of intelligent behavior (that most would agree upon) is the human brain, I think it's an ok thing to philosophize about and study. I don't know what "AI experts" you're meeting with, but as an AI researcher myself, the field seems to be the far opposite extreme of what you describe; most experts are becoming nigh-myopic in their obsession with the math/programming/engineering behind the problems.
When you meet the right people in disparate sections of science, you see that there is a lot of wheel reinvention and parallel research going on which would benefit enormously from cross-pollination. Frankly, the field is a very very broad one, and it will take all kinds to advance it.
Too much abstract, hand-wavey discussion gets us nowhere, but neither does obsessing over minutiae.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093044</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31101440</id>
	<title>Re:Depends on the test.</title>
	<author>Kenoli</author>
	<datestamp>1265913000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Using one specific type of test to measure intelligence won't work very well. The designers will simply have their "AI" built around that one task. Chess playing computers, for example, can rely heavily on sheer processing speed and vast memory. They don't really need to be 'intelligent' to beat a human player.<br>
<br>
But who cares how it works internally, right? Well, the problem is that this sort of approach limits what the AI can do. The human player can potentially get up and go do a billion other completely unrelated things, but the AI can <i>only play chess</i>.</htmltext>
<tokenext>Using one specific type of test to measure intelligence wo n't work very well .
The designers will simply have their " AI " built around that one task .
Chess playing computers , for example , can rely heavily on sheer processing speed and vast memory .
They do n't really need to be 'intelligent ' to beat a human player .
But who cares how it works internally , right ?
Well , the problem is that this sort of approach limits what the AI can do .
The human player can potentially get up and go do a billion other completely unrelated things , but the AI can only play chess .</tokenext>
<sentencetext>Using one specific type of test to measure intelligence won't work very well.
The designers will simply have their "AI" built around that one task.
Chess playing computers, for example, can rely heavily on sheer processing speed and vast memory.
They don't really need to be 'intelligent' to beat a human player.
But who cares how it works internally, right?
Well, the problem is that this sort of approach limits what the AI can do.
The human player can potentially get up and go do a billion other completely unrelated things, but the AI can only play chess.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093198</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096146</id>
	<title>Re:Definitions</title>
	<author>Anonymous</author>
	<datestamp>1265049240000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Perhaps consciousness, ingenuity, and creativity are all byproducts of calculation speed and memory.</p></htmltext>
<tokenext>Perhaps consciousness , ingenuity , and creativity are all byproducts of calculation speed and memory .</tokenext>
<sentencetext>Perhaps consciousness, ingenuity, and creativity are all byproducts of calculation speed and memory.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098896</id>
	<title>I studied AI in grad school</title>
	<author>Theovon</author>
	<datestamp>1265900220000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>When will AI surpass human intelligence?  As soon as we figure out how to do artificial intelligence the way popular culture conceives of it.</p><p>There are two main areas of AI research, as I see it:</p><p>(1) Engineered intelligence.  These systems learn, but they learn in carefully controlled structures, like Markov models and mapping functions in genetic algorithms.</p><p>(2) Emergent intelligence.  These are based on evolving systems of simpler structures, like neural nets, and those little cooperating robots you keep hearing about.  In some ways, since the intelligent behavior evolved over time, this is more akin to natural intelligence than artificial intelligence.</p><p>Neither group has really accomplished a hell of a lot.  Speech recognition and computer vision still suck ass.  Group (1) has been dominant since the idea of AI was developed, and frankly, they're not a millimeter closer to understanding how to build up a system that is intelligent, where you understand all the parts you built with.  Group (2) is making some progress, but then they're left with a system they don't understand because they didn't engineer it.</p><p>Dorks like Kurzweil seem to think that as soon as we can fit as much compute power into one chip as we GUESS is in the brain, we'll magically get sentient robots.  That's bullshit.  We need software systems that learn and adapt, and we just haven't figured out how to make those.</p></htmltext>
<tokenext>When will AI surpass human intelligence ?
As soon as we figure out how to do artificial intelligence the way popular culture conceives of it . There are two main areas of AI research , as I see it : ( 1 ) Engineered intelligence .
These systems learn , but they learn in carefully controlled structures , like Markov models and mapping functions in genetic algorithms .
( 2 ) Emergent intelligence .
These are based on evolving systems of simpler structures , like neural nets , and those little cooperating robots you keep hearing about .
In some ways , since the intelligent behavior evolved over time , this is more akin to natural intelligence than artificial intelligence . Neither group has really accomplished a hell of a lot .
Speech recognition and computer vision still suck ass .
Group ( 1 ) has been dominant since the idea of AI was developed , and frankly , they 're not a millimeter closer to understanding how to build up a system that is intelligent , where you understand all the parts you built with .
Group ( 2 ) is making some progress , but then they 're left with a system they do n't understand because they did n't engineer it . Dorks like Kurzweil seem to think that as soon as we can fit as much compute power into one chip as we GUESS is in the brain , we 'll magically get sentient robots .
That 's bullshit .
We need software systems that learn and adapt , and we just have n't figured out how to make those .</tokenext>
<sentencetext>When will AI surpass human intelligence?
As soon as we figure out how to do artificial intelligence the way popular culture conceives of it. There are two main areas of AI research, as I see it: (1) Engineered intelligence.
These systems learn, but they learn in carefully controlled structures, like Markov models and mapping functions in genetic algorithms.
(2) Emergent intelligence.
These are based on evolving systems of simpler structures, like neural nets, and those little cooperating robots you keep hearing about.
In some ways, since the intelligent behavior evolved over time, this is more akin to natural intelligence than artificial intelligence. Neither group has really accomplished a hell of a lot.
Speech recognition and computer vision still suck ass.
Group (1) has been dominant since the idea of AI was developed, and frankly, they're not a millimeter closer to understanding how to build up a system that is intelligent, where you understand all the parts you built with.
Group (2) is making some progress, but then they're left with a system they don't understand because they didn't engineer it. Dorks like Kurzweil seem to think that as soon as we can fit as much compute power into one chip as we GUESS is in the brain, we'll magically get sentient robots.
That's bullshit.
We need software systems that learn and adapt, and we just haven't figured out how to make those.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097224</id>
	<title>AI will be *different*, not necessarily better</title>
	<author>LordZardoz</author>
	<datestamp>1265882580000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>When it comes to predicting the impact of a sentient AI on human civilization, there is never any shortage of alarmism.  I am not an expert, but I am a programmer.  And I believe three things to be true with respect to AI.</p><p>1)  Until we have a better understanding of why humans are sentient in the first place, we are probably not going to get any closer to recreating that phenomenon in a computer program.</p><p>2)  A Turing Complete AI is about as far off as the discovery of a room temperature superconductor or a form of fusion suitable for large scale power generation.  We may be close, but probably not *that* close.</p><p>3)  I seriously doubt that any AI that we are going to be able to create with anything resembling current computer technology is going to have a thought process even close to our own.</p><p>Think about it for a moment.  Human intelligence is shaped as much by our 5 senses, our capability to create and understand language, our emotions, our ability to affect our surroundings and observe those effects, and to communicate with one another as it is our capability for logic and math.  The factors that will shape an A.I.
are so different as to create the possibility that a Human Intelligence and an Artificial Intelligence may not even be able to meaningfully communicate.</p><p>Will the first sentient AI be hosted on a single computer, or will it be a gestalt effect encompassing the entire internet?<br>Will the sentient AI be aware of time in anything even close to the way that we are?<br>Will the sentient AI even be capable of 'wanting' anything, given that it will have no need for sleep?<br>Will the sentient AI be able to comprehend the nature of its existence as a program, and be able to manipulate its own variables by choice?<br>Will the sentient AI fear its own termination, or not really care knowing it can easily be reloaded?</p><p>I would say that being threatened by a computer based AI that is better able to perform 'intellectual work' is about as reasonable as being threatened by cheetahs because they are better at running really goddamn fast.</p><p>I will admit that the idea of AIs eliminating paying jobs of a particular sort is an interesting problem to consider, but not that different from considering what will happen when we can create robots capable of performing all types of manual labour.  Will that result in world wide poverty, or will it result in world wide prosperity ala StarTrek?</p><p>END COMMUNICATION</p></htmltext>
<tokenext>When it comes to predicting the impact of a sentient AI on human civilization , there is never any shortage of alarmism .
I am not an expert , but I am a programmer .
And I believe three things to be true with respect to AI . 1 ) Until we have a better understanding of why humans are sentient in the first place , we are probably not going to get any closer to recreating that phenomenon in a computer program . 2 ) A Turing Complete AI is about as far off as the discovery of a room temperature superconductor or a form of fusion suitable for large scale power generation .
We may be close , but probably not * that * close . 3 ) I seriously doubt that any AI that we are going to be able to create with anything resembling current computer technology is going to have a thought process even close to our own . Think about it for a moment .
Human intelligence is shaped as much by our 5 senses , our capability to create and understand language , our emotions , our ability to affect our surroundings and observe those effects , and to communicate with one another as it is our capability for logic and math .
The factors that will shape an A.I .
are so different as to create the possibility that a Human Intelligence and an Artificial Intelligence may not even be able to meaningfully communicate . Will the first sentient AI be hosted on a single computer , or will it be a gestalt effect encompassing the entire internet ? Will the sentient AI be aware of time in anything even close to the way that we are ? Will the sentient AI even be capable of 'wanting ' anything , given that it will have no need for sleep ? Will the sentient AI be able to comprehend the nature of its existence as a program , and be able to manipulate its own variables by choice ? Will the sentient AI fear its own termination , or not really care knowing it can easily be reloaded ? I would say that being threatened by a computer based AI that is better able to perform 'intellectual work ' is about as reasonable as being threatened by cheetahs because they are better at running really goddamn fast . I will admit that the idea of AIs eliminating paying jobs of a particular sort is an interesting problem to consider , but not that different from considering what will happen when we can create robots capable of performing all types of manual labour .
Will that result in world wide poverty , or will it result in world wide prosperity ala StarTrek ? END COMMUNICATION</tokenext>
<sentencetext>When it comes to predicting the impact of a sentient AI on human civilization, there is never any shortage of alarmism.
I am not an expert, but I am a programmer.
And I believe three things to be true with respect to AI. 1) Until we have a better understanding of why humans are sentient in the first place, we are probably not going to get any closer to recreating that phenomenon in a computer program. 2) A Turing Complete AI is about as far off as the discovery of a room-temperature superconductor or a form of fusion suitable for large-scale power generation.
We may be close, but probably not *that* close. 3) I seriously doubt that any AI that we are going to be able to create with anything resembling current computer technology is going to have a thought process even close to our own. Think about it for a moment.
Human intelligence is shaped as much by our five senses, our capability to create and understand language, our emotions, our ability to affect our surroundings and observe those effects, and our ability to communicate with one another as by our capability for logic and math.
The factors that will shape an A.I. are so different as to create the possibility that a Human Intelligence and an Artificial Intelligence may not even be able to meaningfully communicate. Will the first sentient AI be hosted on a single computer, or will it be a gestalt effect encompassing the entire internet? Will the sentient AI be aware of time in anything even close to the way that we are? Will the sentient AI even be capable of 'wanting' anything, given that it will have no need for sleep? Will the sentient AI be able to comprehend the nature of its existence as a program, and be able to manipulate its own variables by choice? Will the sentient AI fear its own termination, or not really care, knowing it can easily be reloaded? I would say that being threatened by a computer-based AI that is better able to perform 'intellectual work' is about as reasonable as being threatened by cheetahs because they are better at running really goddamn fast. I will admit that the idea of AIs eliminating paying jobs of a particular sort is an interesting problem to consider, but not that different from considering what will happen when we can create robots capable of performing all types of manual labour.
Will that result in worldwide poverty, or will it result in worldwide prosperity a la Star Trek? END COMMUNICATION</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099680</id>
	<title>a similar study...</title>
	<author>benob</author>
	<datestamp>1265904480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>...was performed in the speech community, and it yielded somewhat incompatible results.</p><p><a href="http://www.asru2009.org/uploadedimages/talk/rkm\_talk.pdf" title="asru2009.org" rel="nofollow">http://www.asru2009.org/uploadedimages/talk/rkm\_talk.pdf</a> [asru2009.org]</p><p>They gathered year predictions on milestones like "a majority of mobile phones can translate conversations" from 127 researchers from the speech community and compared them to those of the same study performed 6 years ago and 12 years ago. The funny part is that the averages slide with time, as if the future was near, but unreachable. Also, "never" was a possible answer, and it often showed up with a majority of votes.</p><p>From the presentation:<br>* The future appears to be no nearer than it was previously!<br>* The level of scepticism has remained remarkably stable, but pessimism (realism?) seems to have increased</p></htmltext>
<tokenext>...was performed in the speech community , and it yielded somewhat incompatible results.http : //www.asru2009.org/uploadedimages/talk/rkm \ _talk.pdf [ asru2009.org ] They gathered year predictions on milestones like " a majority of mobile phones can translate conversations " from 127 researchers from the speech community and compared them to those of the same study performed 6 years ago and 12 years ago .
The funny part is that the averages slide with time , as if the future was near , but unreachable .
Also , " never " was a possible answers , and it often showed up with a majority of votes.From the presentation : * The future appears to be no nearer than it was previously !
* The level of scepticism has remained remarkably stable , but pessimism ( realism ?
) seems to have increased</tokentext>
<sentencetext>...was performed in the speech community, and it yielded somewhat incompatible results. http://www.asru2009.org/uploadedimages/talk/rkm\_talk.pdf [asru2009.org] They gathered year predictions on milestones like "a majority of mobile phones can translate conversations" from 127 researchers from the speech community and compared them to those of the same study performed 6 years ago and 12 years ago.
The funny part is that the averages slide with time, as if the future was near, but unreachable.
Also, "never" was a possible answers, and it often showed up with a majority of votes.From the presentation:* The future appears to be no nearer than it was previously!
* The level of scepticism has remained remarkably stable, but pessimism (realism?
) seems to have increased</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096386</id>
	<title>Re:The obvious solution</title>
	<author>Chrontius</author>
	<datestamp>1265051700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Are you crazy?  The proper second step is to install wireless hardware and a network stack allowing for zeroconfig networking and automatic synchronization to the cloud.
<br> <br>
<i>Then</i> when events <i>do</i> conspire to make you dead, you just restore from backup, having not lost anything too critical - a good analogy would be losing an arm in an age where prosthetics are just as good as the real thing but subtly different - Ghost in the Shell, for example.
<br> <br>
Also, you get to be in two places at once.</htmltext>
<tokenext>Are you crazy ?
The proper second step is to install wireless hardware and a network stack allowing for zeroconfig networking and automatic synchronization to the cloud .
Then when events do conspire to make you dead , you just restore from backup , having not lost anything too critical - a good analogy would be losing an arm in an age where prosthetics are just as good as the real thing but subtly different - Ghost in the Shell , for example .
Also , you get to be in two places at once .</tokentext>
<sentencetext>Are you crazy?
The proper second step is to install wireless hardware and a network stack allowing for zeroconfig networking and automatic synchronization to the cloud.
Then when events do conspire to make you dead, you just restore from backup, having not lost anything too critical - a good analogy would be losing an arm in an age where prosthetics are just as good as the real thing but subtly different - Ghost in the Shell, for example.
Also, you get to be in two places at once.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093208</id>
	<title>I'll tell you when</title>
	<author>macraig</author>
	<datestamp>1265030580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>"When Will AI Surpass Human Intelligence?"</p></div></blockquote><p>Precisely at the instant that people stop asking this question.  Didn't anyone ever tell you that a watched pot never boils?</p>
	</htmltext>
<tokenext>" When Will AI Surpass Human Intelligence ?
" Precisely at the instant that people stop asking this question .
Did n't anyone ever tell you that a watched pot never boils ?</tokentext>
<sentencetext>"When Will AI Surpass Human Intelligence?
"Precisely at the instant that people stop asking this question.
Didn't anyone ever tell you that a watched pot never boils?
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099384</id>
	<title>Re:No way.</title>
	<author>mr exploiter</author>
	<datestamp>1265902740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'm more than dubious... I think that there is so little value in what AI experts have predicted in the past that I didn't even bother to RTFA.</p></htmltext>
<tokenext>I 'm more than dubious... I think that there is so little value in what AI experts have predicted in the past that I did n't even bother to RTFA .</tokentext>
<sentencetext>I'm more than dubious... I think that there is so little value in what AI experts have predicted in the past that I didn't even bother to RTFA.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099186</id>
	<title>Re:Space shows</title>
	<author>hawkfish</author>
	<datestamp>1265901720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>I've often thought Space shows - and any show in the future, really - are incredibly silly.</p></div><p>I mostly agree, but then I open my flip phone...</p></div>
	</htmltext>
<tokenext>I 've often thought Space shows - and any show in the future , really - are incredibly silly.I mostly agree , but then I open my flip phone.. .</tokentext>
<sentencetext>I've often thought Space shows - and any show in the future, really - are incredibly silly. I mostly agree, but then I open my flip phone...
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092918</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095042</id>
	<title>Re:Definitions</title>
	<author>ShanghaiBill</author>
	<datestamp>1265040420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>
<i>
&gt; Please define "intelligence."
</i>
</p><p>
Intelligence:  The ability to formulate an effective initial response to a novel situation.
</p><p>
<i>
&gt; analyzing every single possible permutation
</i>
</p><p>
This doesn't matter.  Intelligence depends only on behavior, not on mechanism.  If a system behaves intelligently, then it is intelligent.  But in practice, considering every permutation is almost never an effective strategy, even in little "toy" domains, like chess.
</p><p>
<i>
&gt;Consciousness? We can barely define that
</i>
</p><p>
Actually, we can't define consciousness <b>at all</b>.  Some philosophers believe that consciousness is an illusion.  Some cultures believe even plants and rocks are conscious.  Other cultures believe that animals have no consciousness.  Many people used to believe that Africans had no real consciousness, making it okay to enslave them.  Arguing about consciousness is like arguing about souls.
</p><p>
There is no good evidence that consciousness (if it exists) is either a necessary or sufficient condition for intelligence.</p></htmltext>
<tokenext>&gt; Please define " intelligence .
" Intelligence : The ability to formulate an effective initial response to a novel situation .
&gt; analyzing every single possible permutation This does n't matter .
Intelligence depends only on behavior , not on mechanism .
If a system behaves intelligently , then it is intelligent .
But in practice , considering every permutation is almost never an effective strategy , even in little " toy " domains , like chess .
&gt; Consciousness ? We can barely define that Actually , we ca n't define consciousness at all .
Some philosophers believe that consciousness is an illusion .
Some cultures believe even plants and rocks are conscious .
Other cultures believe that animals have no consciousness .
Many people used to believe that Africans had no real consciousness , making it okay to enslave them .
Arguing about consciousness is like arguing about souls .
There is no good evidence that consciousness ( if it exists ) is either a necessary or sufficient condition for intelligence .</tokentext>
<sentencetext>

&gt; Please define "intelligence.
"


Intelligence:  The ability to formulate an effective initial response to a novel situation.
&gt; analyzing every single possible permutation


This doesn't matter.
Intelligence depends only on behavior, not on mechanism.
If a system behaves intelligently, then it is intelligent.
But in practice, considering every permutation is almost never an effective strategy, even in little "toy" domains, like chess.
&gt;Consciousness? We can barely define that


Actually, we can't define consciousness at all.
Some philosophers believe that consciousness is an illusion.
Some cultures believe even plants and rocks are conscious.
Other cultures believe that animals have no consciousness.
Many people used to believe that Africans had no real consciousness, making it okay to enslave them.
Arguing about consciousness is like arguing about souls.
There is no good evidence that consciousness (if it exists) is either a necessary or sufficient condition for intelligence.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096460</id>
	<title>Human Intelligence is not Mechanistic</title>
	<author>warncke</author>
	<datestamp>1265052480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I try to say this to "AI" researchers, and they usually get annoyed.  It is very Douglas Adams.

The point is that you can't emulate a system with infinite states using a finite machine.  All you can emulate is a mechanical model of the underlying system, which is not the same thing.

Even if you emulate at the neural level, you can't emulate the infinite input array of sensory information pouring over those neurons.

It just won't work.

But if people want to keep getting checks signed, and find people dumb enough to sign them, why argue?</htmltext>
<tokenext>I try to say this to " AI " researchers , and they usually get annoyed .
It is very Douglas Adams .
The point is that you ca n't emulate a system with infinite states using a finite machine .
All you can emulate is a mechanical model of the underlying system , which is not the same thing .
Even if you emulate at the neural level , you ca n't emulate the infinite input array of sensory information pouring over those neurons .
It just wo n't work .
But if people want to keep getting checks signed , and find people dumb enough to sign them , why argue ?</tokentext>
<sentencetext>I try to say this to "AI" researchers, and they usually get annoyed.
It is very Douglas Adams.
The point is that you can't emulate a system with infinite states using a finite machine.
All you can emulate is a mechanical model of the underlying system, which is not the same thing.
Even if you emulate at the neural level, you can't emulate the infinite input array of sensory information pouring over those neurons.
It just won't work.
But if people want to keep getting checks signed, and find people dumb enough to sign them, why argue?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093384</id>
	<title>Ray will be disappointed</title>
	<author>Anonymous</author>
	<datestamp>1265031540000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>really? no Kurzweil or Singularity threads on *this* topic?</p><p>For shame!</p><p>http://en.wikipedia.org/wiki/Technological\_singularity</p><p>http://en.wikipedia.org/wiki/The\_Singularity\_is\_Near</p></htmltext>
<tokenext>really ?
no Kurzweil or Singularity threads on * this * topic ? For shame ! http : //en.wikipedia.org/wiki/Technological \ _singularityhttp : //en.wikipedia.org/wiki/The \ _Singularity \ _is \ _Near</tokentext>
<sentencetext>really?
no Kurzweil or Singularity threads on *this* topic? For shame! http://en.wikipedia.org/wiki/Technological\_singularity http://en.wikipedia.org/wiki/The\_Singularity\_is\_Near</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948</id>
	<title>We make mistakes.  We make games.</title>
	<author>Anonymous</author>
	<datestamp>1265029380000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>Artificial intelligences will certainly be capable of doing a lot of work, and indeed managing those tasks to accomplish greater tasks.  Let's make a giant assumption that we find a way out of the current science fiction conundrums of control and cooperation with guided artificial intelligences... what is our role as human beings in this mostly-jobless world?</p><p>The role of the economy is to exchange the goods needed to survive and accomplish things.  When everyone can have an autofarm and manufacturing fabricator, there really wouldn't be room for a traditional economy.  A craigslist-style trading system would be about all that would be theoretically needed - most services would be interchangeable and not individually valuable.</p><p>What role will humanity play in such a system?  We'd still have personality, and our own perspective that couldn't be had by live-by-copy intelligent digital software (until true brain scans become possible).  We'd be able to write, have time to create elaborate simulations (with ever-improving toolsets), and expand the human exploration of experience in general.</p><p>As humans, the way we best grow is by making mistakes, and finding a way to use that.  It's how we write better software, solve difficult problems, create great art, and even generate industries.  It's our hidden talent.  Games are our way of making such mistakes safe, and even more fun - and I see games and stories as increasingly big parts of our exploration of the reality we control.</p><p>Optimized software can also learn from its mistakes in a way - but it takes the accumulated mistakes on a scale only a human can make to get something really interesting.  We simply wouldn't trust software to make that many mistakes.</p><p>Ryan Fenton</p></htmltext>
<tokenext>Artificial intelligences will certainly be capable of doing a lot of work , and indeed managing those tasks to accomplish greater tasks .
Let 's make a giant assumption that we find a way out of the current science fiction conundrums of control and cooperation with guided artificial intelligences... what is our role as human beings in this mostly-jobless world ? The role of the economy is to exchange the goods needed to survive and accomplish things .
When everyone can have an autofarm and manufacturing fabricator , there really would n't be room for a traditional economy .
A craigslist-style trading system would be about all that would be theoretically needed - most services would be interchangeable and not individually valuable.What role will humanity play in such a system ?
We 'd still have personality , and our own perspective that could n't be had by live-by-copy intelligent digital software ( until true brain scans become possible ) .
We 'd be able to write , have time to create elaborate simulations ( with ever-improving toolsets ) , and expand the human exploration of experience in general.As humans , the way we best grow is by making mistakes , and finding a way to use that .
It 's how we write better software , solve difficult problems , create great art , and even generate industries .
It 's our hidden talent .
Games are our way of making such mistakes safe , and even more fun - and I see games and stories as increasingly big parts of our exploration of the reality we control.Optimized software can also learn from its mistakes in a way - but it takes the accumulated mistakes on a scale only a human can make to get something really interesting .
We simply would n't trust software to make that many mistakes.Ryan Fenton</tokentext>
<sentencetext>Artificial intelligences will certainly be capable of doing a lot of work, and indeed managing those tasks to accomplish greater tasks.
Let's make a giant assumption that we find a way out of the current science fiction conundrums of control and cooperation with guided artificial intelligences... what is our role as human beings in this mostly-jobless world?The role of the economy is to exchange the goods needed to survive and accomplish things.
When everyone can have an autofarm and manufacturing fabricator, there really wouldn't be room for a traditional economy.
A craigslist-style trading system would be about all that would be theoretically needed - most services would be interchangeable and not individually valuable. What role will humanity play in such a system?
We'd still have personality, and our own perspective that couldn't be had by live-by-copy intelligent digital software (until true brain scans become possible).
We'd be able to write, have time to create elaborate simulations (with ever-improving toolsets), and expand the human exploration of experience in general. As humans, the way we best grow is by making mistakes, and finding a way to use that.
It's how we write better software, solve difficult problems, create great art, and even generate industries.
It's our hidden talent.
Games are our way of making such mistakes safe, and even more fun - and I see games and stories as increasingly big parts of our exploration of the reality we control. Optimized software can also learn from its mistakes in a way - but it takes the accumulated mistakes on a scale only a human can make to get something really interesting.
We simply wouldn't trust software to make that many mistakes. Ryan Fenton</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095550</id>
	<title>Re:The obvious solution</title>
	<author>shentino</author>
	<datestamp>1265044320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Won't work.</p><p>The brain relies on quantum mechanics to do its things, which is uncopyable state.</p><p>Not to mention that a LIVE brain is a BUSY brain, with nerve pulses going everywhere.</p></htmltext>
<tokenext>Wo n't work.The brain relies on quantum mechanics to do its things , which is uncopyable state.Not to mention that a LIVE brain is a BUSY brain , with nerve pulses going everywhere .</tokentext>
<sentencetext>Won't work. The brain relies on quantum mechanics to do its things, which is uncopyable state. Not to mention that a LIVE brain is a BUSY brain, with nerve pulses going everywhere.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100748</id>
	<title>Re:This touches on a problem I have</title>
	<author>Xanator</author>
	<datestamp>1265909160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>
<br>
Actually, you don't even need a robot to replace some menial work...
<br>
A lot of work could be replaced by a simple script, since most work is just some modification of a repeating task.
<br> <br>
We are investigating robots in order to reproduce the hard work that requires thinking, so that the discovery and generation of knowledge becomes faster and of better quality; replacing human work will only be a happy consequence.</htmltext>
<tokenext>actually you do n't even need a robot to replace some menial work.... . A lot of works could be replaced by a simple script , since most work are just some modification or a repeating task We are investigating robots in order to reproduce the hardwork that requires thinking , so that the discovery and generation of knowledge becomes faster and of better quality , replacing the human work , will only be a happy consequence</tokentext>
<sentencetext>

Actually, you don't even need a robot to replace some menial work...

A lot of work could be replaced by a simple script, since most work is just some modification of a repeating task.
 
We are investigating robots in order to reproduce the hard work that requires thinking, so that the discovery and generation of knowledge becomes faster and of better quality; replacing human work will only be a happy consequence</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31103452</id>
	<title>Re:No way.</title>
	<author>elrous0</author>
	<datestamp>1265920440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It's funny, not long ago I was watching the 1984 movie <a href="http://en.wikipedia.org/wiki/Runaway\_(1984\_film)" title="wikipedia.org">Runaway</a> [wikipedia.org], where it was assumed that we were just a few years away from robots intelligent enough to babysit our kids. Cut to 25 years later and my Roomba still gets stuck in corners and can't even climb a stair.</htmltext>
<tokenext>It 's funny , not long ago I was watching the 1984 movie Runaway [ wikipedia.org ] , where it was assumed that we were just a few years away from robots intelligent enough to babysit our kids .
Cut to 25 years later and my Roomba still gets stuck in corners and ca n't even climb a stair .</tokentext>
<sentencetext>It's funny, not long ago I was watching the 1984 movie Runaway [wikipedia.org], where it was assumed that we were just a few years away from robots intelligent enough to babysit our kids.
Cut to 25 years later and my Roomba still gets stuck in corners and can't even climb a stair.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094608</id>
	<title>Re:No way.</title>
	<author>cptdondo</author>
	<datestamp>1265037540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>When I was in college (and a fine Ivy League school it was), AI was 10 years away according to my profs, who were the leading authorities in the field.  Now, 30 years later, it's 20 years away.  Not sure what the progression is, but I'd say in 20 years it will be 30 years away.</p><p>I'm not holding my breath on this one.  We'll make machines that can mimic most basic human behavior - like those bloody stupid phone switchboards - but an actual creative and independently thinking intelligence is a long, long way off.</p></htmltext>
<tokenext>When I was in college ( and a fine Ivy League school it was ) , AI was 10 years away according to my profs , who were the leading authorities in the field .
Now , 30 years later , it 's 20 years away .
Not sure what the progression is , but I 'd say in 20 years it will be 30 years away.I 'm not holding my breath on this one .
We 'll make machines that can mimic most basic human behavior - like those bloody stupid phone switchboards - but an actual creative and independently thinking intelligence is a long , long way off .</tokentext>
<sentencetext>When I was in college (and a fine Ivy League school it was), AI was 10 years away according to my profs, who were the leading authorities in the field.
Now, 30 years later, it's 20 years away.
Not sure what the progression is, but I'd say in 20 years it will be 30 years away. I'm not holding my breath on this one.
We'll make machines that can mimic most basic human behavior - like those bloody stupid phone switchboards - but an actual creative and independently thinking intelligence is a long, long way off.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092956</id>
	<title>Skewed sample</title>
	<author>Homburg</author>
	<datestamp>1265029380000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>The problem is, this isn't a survey of "AI experts," it's a survey of participants in the <a href="http://agi-conf.org/2010/call-for-papers/" title="agi-conf.org">Artificial General Intelligence conference</a> [agi-conf.org]. As far as I can see, this is a conference populated by the few remaining holdouts who believe that creating human-like, or human-equivalent, AIs is a tractable or interesting problem; most AI research now is oriented towards much more specific aspects of intelligence. So this is a poll of a subset of AI researchers who have self-selected along the lines that they think human-equivalent AI is plausible in the near-ish future; it's hardly surprising, then, that the results show that many of them do in fact believe human-equivalent AI is plausible in the near-ish future.</p><p>I would be much more interested in a wider poll of AI researchers; I highly doubt anything like as many would predict Nobel-prize-winning AIs in 10-20 years, or even ever. TFA itself reports a survey of AI researchers in 2006, in which 41\% said they thought human-equivalent AI would never be produced, and another 41\% said they thought it would take 50 years to produce such a thing.</p></htmltext>
<tokenext>The problem is , this is n't a survey of " AI experts , " it 's a survey of participants in the Artificial General Intelligence conference [ agi-conf.org ] .
As far as I can see , this is a conference populated by the few remaining holdouts who believe that creating human-like , or human-equivalent , AIs , is a tractable or interesting problem ; most AI research now is oriented towards much more specific aspects of intelligence .
So this is a poll of a subset of AI researchers who have self-selected along the lines that they think human-equivalent AI is plausible in the near-ish future ; it 's hardly surprising , then , that the results show that many of them do in fact believe human-equivalent AI is plausible in the near-ish future.I would be much more interested in a wider poll of AI researchers ; I highly doubt anything like as many would predict nobel-prize-winning AIs in 10-20 years , or even ever .
TFA itself reports a survey of AI researchers in 2006 , in which 41 \ % said they thought human-equivalent AI would never be produced , and another 41 \ % said they thought it would take 50 years to produce such a thing .</tokentext>
<sentencetext>The problem is, this isn't a survey of "AI experts," it's a survey of participants in the Artificial General Intelligence conference [agi-conf.org].
As far as I can see, this is a conference populated by the few remaining holdouts who believe that creating human-like, or human-equivalent, AIs, is a tractable or interesting problem; most AI research now is oriented towards much more specific aspects of intelligence.
So this is a poll of a subset of AI researchers who have self-selected along the lines that they think human-equivalent AI is plausible in the near-ish future; it's hardly surprising, then, that the results show that many of them do in fact believe human-equivalent AI is plausible in the near-ish future. I would be much more interested in a wider poll of AI researchers; I highly doubt anything like as many would predict Nobel-prize-winning AIs in 10-20 years, or even ever.
TFA itself reports a survey of AI researchers in 2006, in which 41\% said they thought human-equivalent AI would never be produces, and another 41\% said they thought it would take 50 years to produce such a thing.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096360</id>
	<title>Easy answer.</title>
	<author>Anonymous</author>
	<datestamp>1265051400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Never.</p></htmltext>
<tokenext>Never .</tokentext>
<sentencetext>Never.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097858</id>
	<title>Re:No way.</title>
	<author>Yvanhoe</author>
	<datestamp>1265889960000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>The technology doesn't go from zero. Just see how<nobr> <wbr></nobr>:<br>
- Google translates web pages and corrects erroneous entries<br>
- Microsoft Word spots grammatical mistakes<br>
- Theorem provers are used frequently<br>
- package managers maintain interacting packages in correct versions<br>
<br>
About what consciousness is, the progress made has been overwhelming, but the media don't like this kind of deep issue and don't run many articles on what we know about it (mainly: it is a psychological construct, nothing more, as can be seen through its many dysfunctions. No magic there, sorry) <br>
Our understanding and mimicking of the processes of learning, of visual conceptualization, of spatial sense, of semantic links, all become better. Granted, it went slower than expected, but it is steady progress, and the presence of a threshold where it becomes exponentially faster once you can "make" exponentially more "minds" to work on the problem seems quite logical.</htmltext>
<tokenext>The technology does n't go from zero .
Just see how : - Google translates web pages and corrects erroneous entries - Microsoft Word spots grammatical mistakes - Theorem provers are used frequently - package managers maintain interacting packages in correct versions About what consciousness is , the progress made has been overwhelming but the media do n't like this kind of deep issue and do n't run many articles on what we know about it ( mainly : it is a psychological construct , nothing more , as can be seen through its many dysfunctions .
No magic there , sorry ) Our understanding and mimicking of the processes of learning , of visual conceptualization , of spatial sense , of semantic links , all become better .
Granted , it went slower than expected , but it is steady progress and the presence of a threshold where it becomes exponentially faster once you can " make " exponentially more " minds " to work on the problem seems quite logical .</tokentext>
<sentencetext>The technology doesn't go from zero.
Just see how :
- Google translates web pages and corrects erroneous entries
- Microsoft Word spots grammatical mistakes
- Theorem provers are used frequently
- package managers maintain interacting packages in correct versions

About what consciousness is, the progress made has been overwhelming, but the media don't like this kind of deep issue and don't run many articles on what we know about it (mainly: it is a psychological construct, nothing more, as can be seen through its many dysfunctions.
No magic there, sorry)
Our understanding and mimicking of the processes of learning, of visual conceptualization, of spatial sense, of semantic links, all become better.
Granted, it went slower than expected, but it is steady progress, and the presence of a threshold where it becomes exponentially faster once you can "make" exponentially more "minds" to work on the problem seems quite logical.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096038</id>
	<title>Re:We make mistakes. We make games.</title>
	<author>winwar</author>
	<datestamp>1265048220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"When everyone can have an autofarm and manufacturing fabricator, there really wouldn't be room for a traditional economy."</p><p>Really?  Where exactly would the resources come from?  The plans?  The expertise?</p><p>You might be able to build a house or a car but you still need the materials and the plans and the machine.  And I suspect that those will cost money.  Which means that you will need a job.</p></htmltext>
<tokenext>" When everyone can have an autofarm and manufacturing fabricator , there really would n't be room for a traditional economy. " Really ?
Where exactly would the resources come from ?
The plans ?
The expertise ? You might be able to build a house or a car but you still need the materials and the plans and the machine .
And I suspect that those will cost money .
Which means that you will need a job .</tokentext>
<sentencetext>"When everyone can have an autofarm and manufacturing fabricator, there really wouldn't be room for a traditional economy."Really?
Where exactly would the resources come from?
The plans?
The expertise?You might be able to build a house or a car but you still need the materials and the plans and the machine.
And I suspect that those will cost money.
Which means that you will need a job.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093234</id>
	<title>Re:No way.</title>
	<author>Anonymous</author>
	<datestamp>1265030700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I used to spend a lot of time thinking about consciousness, and ended up having a great conversation with a friend one evening about this topic.  Here's the problem: it takes about 20 years for a human brain to learn enough to get to a baseline level of knowledge to start learning something specialized.  Even if we have an AI in 20 years that has the capability of the human brain, we're years from there at being able to exploit it.</p><p>There's another strange issue with this sort of problem.  Let's say Moore's Law continues at that point.  If I start something today that'll take 20 years, but in 18 months it'll take 10 years, why bother?</p><p>That's not even getting into the scary issues of attempting to control something that's smarter than all humans.</p></htmltext>
<tokenext>I used to spend a lot of time thinking about consciousness , and ended up having a great conversation with a friend one evening about this topic .
Here 's the problem : it takes about 20 years for a human brain to learn enough to get to a baseline level of knowledge to start learning something specialized .
Even if we have an AI in 20 years that has the capability of the human brain , we 're years from there at being able to exploit it.There 's another strange issue with this sort of problem .
Let 's say Moore 's Law continues at that point .
If I start something today that 'll take 20 years , but in 18 months it 'll take 10 years , why bother ? That 's not even getting into the scary issues of attempting to control something that 's smarter than all humans .</tokentext>
<sentencetext>I used to spend a lot of time thinking about consciousness, and ended up having a great conversation with a friend one evening about this topic.
Here's the problem: it takes about 20 years for a human brain to learn enough to get to a baseline level of knowledge to start learning something specialized.
Even if we have an AI in 20 years that has the capability of the human brain, we're still years from being able to exploit it. There's another strange issue with this sort of problem.
Let's say Moore's Law continues at that point.
If I start something today that'll take 20 years, but in 18 months it'll take 10 years, why bother? That's not even getting into the scary issues of attempting to control something that's smarter than all humans.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
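The comment above raises the classic "wait calculation": if compute keeps getting cheaper, starting later can finish sooner. A minimal sketch of that arithmetic, using the comment's own hypothetical numbers (a job needing 20 years of today's compute, speed doubling every 18 months); the function and variable names are mine:

```python
# Hypothetical "wait calculation": a job needs 20 years of compute at today's
# speed, and available speed doubles every 18 months. Total elapsed time if we
# idle for `wait_years` first, then run the job at the speed available then.

def completion_time(wait_years, work_years=20.0, doubling_years=1.5):
    speedup = 2.0 ** (wait_years / doubling_years)
    return wait_years + work_years / speedup

# Scan candidate waiting periods (0 to 30 years, 0.1-year steps)
# and pick the one that finishes first.
best_wait = min((w / 10.0 for w in range(0, 301)), key=completion_time)

print(f"start immediately: finish in {completion_time(0):.1f} years")
print(f"wait {best_wait:.1f} years: finish in {completion_time(best_wait):.1f} years")
```

Under these assumed numbers, starting immediately takes the full 20 years, while waiting a few years and then running finishes in roughly a third of that; beyond the optimum, extra waiting just adds idle time.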
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093474</id>
	<title>Silly humans.</title>
	<author>Gregg Alan</author>
	<datestamp>1265032140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I already have. Q.E.D.</p></htmltext>
<tokenext>I already have .
Q.E.D .</tokentext>
<sentencetext>I already have.
Q.E.D.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093670</id>
	<title>Re:What is AI anyway?</title>
	<author>Anonymous</author>
	<datestamp>1265032980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Can a human pick a "truly random" number?</p><p>IF the human brain is "simply" a machine, then mimicking that machine's behaviour is most definitely possible (sort of what it means for something to be a machine).  If it's not a machine, then the only other explanation is that it's "magical."  If you believe that AI can never be achieved, then you must believe that there is something "magical" about the human brain.</p><p>Most of the arguments I've seen denying the possibility of true AI generally boil down to something along the lines of computers missing that "something special" (intelligence - whatever the hell that means) that we humans are imbued with.  Of course, no one can really define what that means...</p></htmltext>
<tokenext>Can a human pick a " truly random " number ? IF the human brain is " simply " a machine , then mimicking that machine 's behaviour is most definitely possible ( sort of what it means for something to be a machine ) .
If it 's not a machine , then the only other explanation is that it 's " magical .
" If you believe that AI can never be achieved , then you must believe that there is something " magical " about the human brain.Most of the arguments I 've seen denying the possibility of true AI generally boil down to something along the lines of computers missing that " something special " ( intelligence - whatever the hell that means ) that we humans are imbued with .
Of course , no one can really define what that means.. .</tokentext>
<sentencetext>Can a human pick a "truly random" number?IF the human brain is "simply" a machine, then mimicking that machine's behaviour is most definitely possible (sort of what it means for something to be a machine).
If it's not a machine, then the only other explanation is that it's "magical.
"  If you believe that AI can never be achieved, then you must believe that there is something "magical" about the human brain.Most of the arguments I've seen denying the possibility of true AI generally boil down to something along the lines of computers missing that "something special" (intelligence - whatever the hell that means) that we humans are imbued with.
Of course, no one can really define what that means...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094202</id>
	<title>AI is just around the corner, and always will be</title>
	<author>mbone</author>
	<datestamp>1265035260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>AI experts have a <b>really</b> poor track record at prediction. I can, for example, remember Marvin Minsky in 1973 talking about how true AI would just require a fairly modest increase in computer power, and should occur within one or two decades. He also said that achieving AI would lead to a general understanding of human intelligence. AI is littered with such confident predictions, starting in the 1950's if not earlier, which never seem to come to pass. With that track record I wouldn't give any weight to any new predictions.</p><p>BTW, I personally think that the Eliza program passed the Turing test in a limited area (in that it could fool some of the people, some of the time), and (given its effectiveness and its simplicity) haven't felt that there is any real scientific interest in the Turing test since.</p></htmltext>
<tokenext>AI experts have a really poor track record at prediction .
I can , for example , remember Marvin Minsky in 1973 talking about how true AI would just require a fairly modest increase in computer power , and should occur within one or two decades .
He also said that achieving AI would lead to a general understanding of human intelligence .
AI is littered with such confident predictions , starting in the 1950 's if not earlier , which never seem to come to pass .
With that track record I would n't give any weight to any new predictions . BTW , I personally think that the Eliza program passed the Turing test in a limited area ( in that it could fool some of the people , some of the time ) , and ( given its effectiveness and its simplicity ) have n't felt that there is any real scientific interest in the Turing test since .</tokentext>
<sentencetext>AI experts have a really poor track record at prediction.
I can, for example, remember Marvin Minsky in 1973 talking about how true AI would just require a fairly modest increase in computer power, and should occur within one or two decades.
He also said that achieving AI would lead to a general understanding of human intelligence.
AI is littered with such confident predictions, starting in the 1950's if not earlier, which  never seem to come to pass.
With that track record I wouldn't give any weight to any new predictions. BTW, I personally think that the Eliza program passed the Turing test in a limited area (in that it could fool some of the people, some of the time), and (given its effectiveness and its simplicity) haven't felt that there is any real scientific interest in the Turing test since.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31101048</id>
	<title>Re:The obvious solution</title>
	<author>Anonymous</author>
	<datestamp>1265910900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Statement: Interesting choice of words, <a href="http://starwars.wikia.com/wiki/HK-47" title="wikia.com" rel="nofollow">meatbag</a> [wikia.com].</p></htmltext>
<tokenext>Statement : Interesting choice of words , meatbag [ wikia.com ] .</tokentext>
<sentencetext>Statement: Interesting choice of words, meatbag [wikia.com].</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093152</id>
	<title>Re:No way.</title>
	<author>Anonymous</author>
	<datestamp>1265030340000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Combine sensory input (video, tactile, audible), referential statistical modeling, massive parallel computation, against looping tumblers of self-modifying/improving code, add a random check variable for uncertainty, and you will eventually get AI.</p></htmltext>
<tokenext>Combine sensory input ( video , tactile , audible ) , referential statistical modeling , massive parallel computation , against looping tumblers of self-modifying/improving code , add a random check variable for uncertainty , and you will eventually get AI .</tokentext>
<sentencetext>Combine sensory input (video, tactile, audible), referential statistical modeling, massive parallel computation, against looping tumblers of self-modifying/improving code, add a random check variable for uncertainty, and you will eventually get AI.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31103692</id>
	<title>Their numbers are off imo</title>
	<author>thetoadwarrior</author>
	<datestamp>1265921280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I just don't think we're that close yet. I do think it is very possible to make a computer that can think as well as a human, and a computer has the distinct advantage of not being emotional, which will make it superior. But I just don't think we're anywhere near that yet. Humans still make far too many mistakes to perfect AI.</htmltext>
<tokenext>I just do n't think we 're that close yet .
I do think it is very possible to make a computer that can think as well as a human and a computer has the distinct advantage of not being emotional which will make it superior .
But I just do n't think we 're anywhere near that yet .
Humans still make far too many mistakes to perfect AI .</tokentext>
<sentencetext>I just don't think we're that close yet.
I do think it is very possible to make a computer that can think as well as a human, and a computer has the distinct advantage of not being emotional, which will make it superior.
But I just don't think we're anywhere near that yet.
Humans still make far too many mistakes to perfect AI.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31110374</id>
	<title>Re:What is AI anyway?</title>
	<author>FiloEleven</author>
	<datestamp>1266007680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I never took statistics, so your talk of Chi^2 is (wait for it) Greek to me, but I do know this: your sequence looks less like the results of chance than either of the other two.  At no point does your sequence have more than 2 of the same digit appearing in a row (run of 2), while both of the other methods do.  If I'm interpreting your statistical analysis correctly, you looked only at the distribution and not the actual sequence, which is important: if any of the sequences had its numbers arranged from 1 to 6 its randomness would be highly suspect even if its distribution was consistent with randomness.  The same is true for the opposite of having no adjacent identical numbers given the small pool you're working with.  For your sample size I estimate that one run of 3 is probable, and certainly more than three runs of 2--see how the physical and virtual dice each have six of those?</p><p>I learned about this from my high school physics course.  Our teacher gave us homework consisting of performing 100 coin tosses and turning in the results.  He then proceeded to look at each paper and accurately tell if the student actually performed the experiment or merely wrote a "random" series of H's and T's.  A similar experiment is detailed in the first third of <a href="http://www.wnyc.org/shows/radiolab/episodes/2009/09/11" title="wnyc.org">this</a> [wnyc.org] radio program: the experimenter had one group toss a coin and the other simply make theirs up, and she didn't know which was doing what.  She said that the way she found who performed the coin tosses was to first look for a run of 4 or more.  I'm sure it gets more complicated if she didn't see one in either group, but the point is that the order matters and randomness <em>looks</em> less random than you'd think--in 100 coin tosses the chance that you'll get 7 of the same face in a row is one in six, which is more likely than it feels like it ought to be.</p><p>Randomness is tricky!</p></htmltext>
<tokenext>I never took statistics , so your talk of Chi ^ 2 is ( wait for it ) Greek to me , but I do know this : your sequence looks less like the results of chance than either of the other two .
At no point does your sequence have more than 2 of the same digit appearing in a row ( run of 2 ) , while both of the other methods do .
If I 'm interpreting your statistical analysis correctly , you looked only at the distribution and not the actual sequence , which is important : if any of the sequences had its numbers arranged from 1 to 6 its randomness would be highly suspect even if its distribution was consistent with randomness .
The same is true for the opposite of having no adjacent identical numbers given the small pool you 're working with .
For your sample size I estimate that one run of 3 is probable , and certainly more than three runs of 2--see how the physical and virtual dice each have six of those ? I learned about this from my high school physics course .
Our teacher gave us homework consisting of performing 100 coin tosses and turning in the results .
He then proceeded to look at each paper and accurately tell if the student actually performed the experiment or merely wrote a " random " series of H 's and T 's .
A similar experiment is detailed in the first third of this [ wnyc.org ] radio program : the experimenter had one group toss a coin and the other simply make theirs up , and she did n't know which was doing what .
She said that the way she found who performed the coin tosses was to first look for a run of 4 or more .
I 'm sure it gets more complicated if she did n't see one in either group , but the point is that the order matters and randomness looks less random than you 'd think--in 100 coin tosses the chance that you 'll get 7 of the same face in a row is one in six , which is more likely than it feels like it ought to be.Randomness is tricky !</tokentext>
<sentencetext>I never took statistics, so your talk of Chi^2 is (wait for it) Greek to me, but I do know this: your sequence looks less like the results of chance than either of the other two.
At no point does your sequence have more than 2 of the same digit appearing in a row (run of 2), while both of the other methods do.
If I'm interpreting your statistical analysis correctly, you looked only at the distribution and not the actual sequence, which is important: if any of the sequences had its numbers arranged from 1 to 6 its randomness would be highly suspect even if its distribution was consistent with randomness.
The same is true for the opposite of having no adjacent identical numbers given the small pool you're working with.
For your sample size I estimate that one run of 3 is probable, and certainly more than three runs of 2--see how the physical and virtual dice each have six of those? I learned about this from my high school physics course.
Our teacher gave us homework consisting of performing 100 coin tosses and turning in the results.
He then proceeded to look at each paper and accurately tell if the student actually performed the experiment or merely wrote a "random" series of H's and T's.
A similar experiment is detailed in the first third of this [wnyc.org] radio program: the experimenter had one group toss a coin and the other simply make theirs up, and she didn't know which was doing what.
She said that the way she found who performed the coin tosses was to first look for a run of 4 or more.
I'm sure it gets more complicated if she didn't see one in either group, but the point is that the order matters and randomness looks less random than you'd think--in 100 coin tosses the chance that you'll get 7 of the same face in a row is one in six, which is more likely than it feels like it ought to be. Randomness is tricky!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094264</parent>
</comment>
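The run-length heuristic in the comment above is easy to check empirically. A quick Monte Carlo sketch (the helper names and trial count are my own, and the comment's specific "one in six" figure is left for the reader to verify) of how often 100 fair coin tosses contain a run of 7 or more identical faces:

```python
import random

def longest_run(tosses):
    """Length of the longest streak of identical outcomes in a sequence."""
    best = cur = 1
    for prev, nxt in zip(tosses, tosses[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

def estimate(n_tosses=100, run_len=7, trials=20_000, seed=0):
    """Monte Carlo estimate of P(some run of run_len+ in n_tosses fair flips)."""
    rng = random.Random(seed)
    hits = sum(
        longest_run([rng.randrange(2) for _ in range(n_tosses)]) >= run_len
        for _ in range(trials)
    )
    return hits / trials

print(f"estimated P(run of 7+ in 100 tosses) = {estimate():.2f}")
```

This is the same idea the physics teacher exploited: made-up "random" sequences almost never contain the long runs that genuine coin flips produce.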
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097272</id>
	<title>Re:This assertion lacks intelligence</title>
	<author>noname444</author>
	<datestamp>1265883360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>It doesn't matter how many transistors you throw at 'artificial' intelligence, it's still just that: artificial. It has no intelligence, just as it has no life.</p></div><p>Artificial here means man-made or unnatural. It doesn't mean "not real" as you seem to be implying (that would be virtual intelligence). Your views seem to be more religious than scientific in nature.</p><p>It's all about defining intelligence. If you define intelligence in this context as "a human", then a machine of course can't be intelligent. I'd argue though that if a machine could perfectly simulate all aspects of human intelligence, it would in fact be intelligent.</p>
	</htmltext>
<tokenext>It does n't matter how many transistors you throw at 'artificial ' intelligence , it 's still just that : artificial .
It has no intelligence , just as it has no life.Artificial here means man-made or unnatural .
It does n't mean " not real " as you seem to be implying ( that would be virtual intelligence ) .
Your views seem to be more religious than scientific in nature.It 's all about defining intelligence .
If you define intelligence in this context as " a human " , then a machine of course ca n't be intelligent .
I 'd argue though that if a machine could perfectly simulate all aspects of human intelligence , it would in fact be intelligent .</tokentext>
<sentencetext>It doesn't matter how many transistors you throw at 'artificial' intelligence, it's still just that: artificial.
It has no intelligence, just as it has no life. Artificial here means man-made or unnatural.
It doesn't mean "not real" as you seem to be implying (that would be virtual intelligence).
Your views seem to be more religious than scientific in nature. It's all about defining intelligence.
If you define intelligence in this context as "a human", then a machine of course can't be intelligent.
I'd argue though that if a machine could perfectly simulate all aspects of human intelligence, it would in fact be intelligent.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093402</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093418</id>
	<title>Better Chart</title>
	<author>darthdavid</author>
	<datestamp>1265031780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Their chart was not very good. Here's their data in a more sensible layout...
<a href="http://spreadsheets.google.com/pub?key=tOpeMuKeVXb\_NwKGdLU3rRg&amp;output=html" title="google.com">http://spreadsheets.google.com/pub?key=tOpeMuKeVXb\_NwKGdLU3rRg&amp;output=html</a> [google.com]</htmltext>
<tokenext>Their chart was not very good .
Here 's their data in a more sensible layout.. . http : //spreadsheets.google.com/pub ? key = tOpeMuKeVXb \ _NwKGdLU3rRg&amp;output = html [ google.com ]</tokentext>
<sentencetext>Their chart was not very good.
Here's their data in a more sensible layout...
http://spreadsheets.google.com/pub?key=tOpeMuKeVXb\_NwKGdLU3rRg&amp;output=html [google.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094116</id>
	<title>Religion</title>
	<author>Anonymous</author>
	<datestamp>1265034900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>An AI that doesn't believe in deities (or other superstitions) is already smarter than most of Humanity...</p></htmltext>
<tokenext>An AI that does n't believe in deities ( or other superstitions ) is already smarter than most of Humanity.. .</tokentext>
<sentencetext>An AI that doesn't believe in deities (or other superstitions) is already smarter than most of Humanity...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092908</id>
	<title>Not serious</title>
	<author>Anonymous</author>
	<datestamp>1265029200000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>Should we remember all of the unrealized promises of AI from the 1950's? What makes anyone believe in these baseless claims? If anything, in 20 years they'll give us a better spam filter. Give me a break..</p></htmltext>
<tokenext>Should we remember all of the unrealized promises of AI from the 1950 's ?
What makes anyone believe in these baseless claims ?
If anything , in 20 years they 'll give us a better spam filter .
Give me a break. .</tokentext>
<sentencetext>Should we remember all of the unrealized promises of AI from the 1950's?
What makes anyone believe in these baseless claims?
If anything, in 20 years they'll give us a better spam filter.
Give me a break..</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31113952</id>
	<title>Re:No way.</title>
	<author>Tablizer</author>
	<datestamp>1265994240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Babbage didn't get his system built</p></div></blockquote><p>He never released Duke Shootem 1.0?</p>
	</htmltext>
<tokenext>Babbage did n't get his system built . He never released Duke Shootem 1.0 ?</tokentext>
<sentencetext>Babbage didn't get his system built. He never released Duke Shootem 1.0?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094192</id>
	<title>ETA: Real Soon Now</title>
	<author>Tumbleweed</author>
	<datestamp>1265035200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Answer: Shortly after Google owns the fiber to everyone's home.</p><p>"Everything is proceeding as Google has foreseen."</p><p>"Your lust for bandwidth is your weakness."<br>"Your faith in Google is yours."</p><p>"Give IN to the bandwidth!"</p><p>That's it, man, I'm moving to Planet Ten!</p></htmltext>
<tokenext>Answer : Shortly after Google owns the fiber to everyone 's home .
" Everything is proceeding as Google has foreseen .
" " Your lust for bandwidth is your weakness .
" " Your faith in Google is yours .
" " Give IN to the bandwidth !
" That 's it , man , I 'm moving to Planet Ten !</tokentext>
<sentencetext>Answer: Shortly after Google owns the fiber to everyone's home.
"Everything is proceeding as Google has foreseen.
""Your lust for bandwidth is your weakness.
""Your faith in Google is yours.
""Give IN to the bandwidth!
"That's it, man, I'm moving to Planet Ten!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094528</id>
	<title>I fear it happened already,</title>
	<author>golden age villain</author>
	<datestamp>1265036940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>even in the poor state of AI nowadays. Just look at the story above this one on the front page (the one about South Carolina).</htmltext>
<tokenext>even in the poor state of AI nowadays .
Just look at the story above this one on the front page ( the one about South Carolina ) .</tokentext>
<sentencetext>even in the poor state of AI nowadays.
Just look at the story above this one on the front page (the one about South Carolina).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093142</id>
	<title>Must stop the future from happening....</title>
	<author>Anonymous</author>
	<datestamp>1265030280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>'Everyone back in the pile!'</htmltext>
<tokentext>'Everyone back in the pile !
'</tokentext>
<sentencetext>'Everyone back in the pile!
'</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095616</id>
	<title>Re:Definitions</title>
	<author>shentino</author>
	<datestamp>1265044860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>AIs and humans have one thing in common: they need to be taught.</p><p>No human can do that right off the bat.</p><p>Of course it's a stretch to expect AIs to surpass ADULT humans that have already gone through at least 12 years of education.</p></htmltext>
<tokentext>AIs and humans have one thing in common , they need to be taught .
No human can do that right off the bat .
Of course it 's a stretch to expect AIs to surpass ADULT humans that have already gone through at least 12 years of education .</tokentext>
<sentencetext>AIs and humans have one thing in common: they need to be taught.
No human can do that right off the bat.
Of course it's a stretch to expect AIs to surpass ADULT humans that have already gone through at least 12 years of education.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093890</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092822</id>
	<title>What do super-intelligent robots think about?</title>
	<author>Anonymous</author>
	<datestamp>1265028720000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>Entropy. The problem for (potentially) immortal beings is always going to be entropy. Given that we created robots, I'm not necessarily of the belief that robots wouldn't insist we stay around for our very brief lives, to help them solve their problems.</p></htmltext>
<tokentext>Entropy .
The problem for ( potentially ) immortal beings is always going to be entropy .
Given that we created robots , I 'm not necessarily of the belief that robots would n't insist we stay around for our very brief lives , to help them solve their problems .</tokentext>
<sentencetext>Entropy.
The problem for (potentially) immortal beings is always going to be entropy.
Given that we created robots, I'm not necessarily of the belief that robots wouldn't insist we stay around for our very brief lives, to help them solve their problems.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094150</id>
	<title>Re:What is AI anyway?</title>
	<author>Anonymous</author>
	<datestamp>1265035080000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Has anyone done any studies asking humans to pick a single "random" number (not giving any bounds)?  Maybe a high percentage would pick "1" or "59" or at least just stay with 1 to 100.  I'm betting a high percentage wouldn't pick a negative number or ones with decimals....maybe pi would get chosen often by the smartyasses....maybe we can't reliably pick random numbers...</p></htmltext>
<tokentext>Has anyone done any studies asking humans to pick a single " random " number ( not giving any bounds ) ?
Maybe a high percentage would pick " 1 " or " 59 " or at least just stay with 1 to 100 .
I 'm betting a high percentage would n't pick a negative number or ones with decimals ... maybe pi would get chosen often by the smartyasses ... maybe we ca n't reliably pick random numbers ...</tokentext>
<sentencetext>has anyone done any studies asking humans to pick a single "random" number (not giving any bounds).
Maybe a high percentage would pick "1" or "59" or at least just stay with 1 to 100 .
I'm betting a high percentage wouldn't pick a negative number or ones with decimals...maybe pi would get chosen often by the smartyasses...maybe we can't reliably pick random numbers...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096422</id>
	<title>Speaking As An Obvious Amateur.....</title>
	<author>DynaSoar</author>
	<datestamp>1265052180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'm wondering if these wizards of AI have any words of wisdom they can pass along regarding the definition of intelligence. If machines are going to surpass humans, as they all agree will happen sooner or later, certainly they have some objective definition or measurement of the construct besides mere emulation.</p><p>Me, I'm just a neuroscientist with a background in cognitive psychology. Like nearly every one of my colleagues on either the practical or theoretical sides of the table, I have opinions on the subject but would never state I or anyone else in our fields can claim to have such a definition acceptable to us with respect to humans, much less a superset of entities capable of exhibiting this phenomenon. These experts must have one, or they couldn't actually answer a question regarding it, either in the absolute (i.e. is actually intelligent rather than emulates) or in comparison (X is more or less intelligent than Y).</p><p>Or perhaps they weren't aware of the need to know what it is they're talking about in order to speak on things such as developmental milestones with comparisons to human capability. Consider the fact that they still think the fatally flawed Turing test to be an adequate test of intelligence, or what looks like intelligence (can they even tell the difference?), when at best it's a test of human fallibility -- or if you prefer, natural stupidity rather than artificial anything. After all, all the judges are human; no program is asked to tell the difference between another program and a human. Yes, fatally flawed; human reactions become biased when they know they're being tested, and it is them being tested, not the programs. 
No program wins; one is simply found to be the one operating when the most humans lose by failing to differentiate.</p><p>Most mystifying is the fact that none of the experts bothered to note that a far more worthy goal of machine development is to become better at what they do, rather than wasting time trying to act like us. But hey, what do I know, besides the theoretical and practical background material on human intelligence, whatever that is. These experts obviously have a handle on things where I apparently can barely employ opposable thumbs without tripping over them.</p><p>And when they're done creating the Ubermindmachine, perhaps they can turn their considerable expertise to explaining to Edsger Dijkstra how to tell whether a submarine is in fact swimming. Somehow, I believe they'd answer that in the affirmative despite not being able to tell whether the screen door goes on the port or starboard side. If and when, I expect to see these and other superb works of soaring intellect among the pages of h+ magazine, the scientific journal published for the 'd00d, do you think shrooms make you, like, you know, smarter?' crowd.</p><p>PS: When a program is shown to run better when it knows it's being watched, then we can start to talk about intelligence. It's called social facilitation. Cockroaches have enough 'intelligence' to show the effect. If a program is going to be smarter than a person, should it not be able to prove itself at least as smart as a cockroach?</p></htmltext>
<tokentext>I 'm wondering if these wizards of AI have any words of wisdom they can pass along regarding the definition of intelligence .
If machines are going to surpass humans , as they all agree will happen sooner or later , certainly they have some objective definition or measurement of the construct besides mere emulation .
Me , I 'm just a neuroscientist with a background in cognitive psychology .
Like nearly every one of my colleagues on either the practical or theoretical sides of the table , I have opinions on the subject but would never state I or anyone else in our fields can claim to have such a definition acceptable to us with respect to humans , much less a superset of entities capable of exhibiting this phenomenon .
These experts must have one , or they could n't actually answer a question regarding it , either in the absolute ( i.e .
is actually intelligent rather than emulates ) or in comparison ( X is more or less intelligent than Y ) .
Or perhaps they were n't aware of the need to know what it is they 're talking about in order to speak on things such as developmental milestones with comparisons to human capability .
Considering the fact that they still think the fatally flawed Turing test to be an adequate test of intelligence or what looks like intelligence ( can they even tell the difference ?
) when at best it 's a test of human fallibility , or if you prefer , natural stupidity rather than artificial anything .
After all , all the judges are human ; no program is asked to tell the difference between another program and a human .
Yes , fatally flawed ; human reactions become biased when they know they 're being tested , and it is them being tested , not the programs .
No program wins ; one is simply found to be the one operating when the most humans lose by failing to differentiate .
Most mystifying is the fact that none of the experts bothered to note that a far more worthy goal of machine development is to become better at what they do , rather than wasting time trying to act like us .
But hey , what do I know , besides the theoretical and practical background material on human intelligence , whatever that is .
These experts obviously have a handle on things where I apparently can barely employ opposable thumbs without tripping over them .
And when they 're done creating the Ubermindmachine , perhaps they can turn their considerable expertise to explaining to Edsger Dijkstra how to tell whether a submarine is in fact swimming .
Somehow , I believe they 'd answer that in the affirmative despite not being able to tell whether the screen door goes on the port or starboard side .
If and when , I expect to see these and other superb works of soaring intellect among the pages of h + magazine , the scientific journal published for the 'd00d , do you think shrooms make you , like , you know , smarter ?
' crowd .
PS : When a program is shown to run better when it knows it 's being watched , then we can start to talk about intelligence .
It 's called social facilitation .
Cockroaches have enough 'intelligence ' to show the effect .
If a program is going to be smarter than a person , should it not be able to prove itself at least as smart as a cockroach ?</tokentext>
<sentencetext>I'm wondering of these wizards of AI have any words of wisdom they can pass along regarding the definition of intelligence.
If machines are going to surpass humans, as they all agree will happen sooner or later, certainly they have some objective definition or measurement of the construct besides mere emulation.
Me, I'm just a neuroscientist with a background in cognitive psychology.
Like nearly every one of my colleagues on either the practical or theoretical sides of the table, I have opinions on the subject but would never state I or anyone else in our fields can claim to have such a definition acceptable to us with respect to humans, much less a superset of entities capable of exhibiting this phenomenon.
These experts must have one, or they couldn't actually answer a question regarding it, either in the absolute (i.e.
is actually intelligent rather than emulates) or in comparison (X is more or less intelligent than Y).
Or perhaps they weren't aware of the need to know what it is they're talking about in order to speak on things such as developmental milestones with comparisons to human capability.
Considering the fact that they still think the fatally flawed Turing test to be an adequate test of intelligence or what looks like intelligence (can they even tell the difference?
) when at best it's a test of human fallibility, or if you prefer, natural stupidity rather than artificial anything.
After all, all the judges are human; no program is asked to tell the difference between another program and a human.
Yes, fatally flawed; human reactions become biased when they know they're being tested, and it is them being tested, not the programs.
No program wins; one is simply found to be the one operating when the most humans lose by failing to differentiate.
Most mystifying is the fact that none of the experts bothered to note that a far more worthy goal of machine development is to become better at what they do, rather than wasting time trying to act like us.
But hey, what do I know, besides the theoretical and practical background material on human intelligence, whatever that is.
These experts obviously have a handle on things where I apparently can barely employ opposable thumbs without tripping over them.
And when they're done creating the Ubermindmachine, perhaps they can turn their considerable expertise to explaining to Edsger Dijkstra how to tell whether a submarine is in fact swimming.
Somehow, I believe they'd answer that in the affirmative despite not being able to tell whether the screen door goes on the port or starboard side.
If and when, I expect to see these and other superb works of soaring intellect among the pages of h+ magazine, the scientific journal published for the 'd00d, do you think shrooms make you, like, you know, smarter?
' crowd.
PS: When a program is shown to run better when it knows it's being watched, then we can start to talk about intelligence.
It's called social facilitation.
Cockroaches have enough 'intelligence' to show the effect.
If a program is going to be smarter than a person, should it not be able to prove itself at least as smart as a cockroach?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095370</id>
	<title>Re:No way.</title>
	<author>Unoti</author>
	<datestamp>1265043060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You make an interesting point.  But computers have a couple of important advantages over humans in this area.  Computers can read a lot faster than people.  In theory, they could watch movies a lot faster.  They have direct connections to the internet.  Also, computers are much more scalable than a human brain -- we can add more nodes and processing power to a computer system, but not to a human brain.</p><p>Also, an AI doesn't necessarily need to be developmentally similar to a human at all.  They may not actually need time to mature like people do.  Granted, if they don't grow up the same way we do yet are intelligent, then they will likely seem very alien to us. But while it may take some amount of wall clock time to nurture a new AI, they don't necessarily need the same amount of time to develop that a human would.</p></htmltext>
<tokentext>You make an interesting point .
But computers have a couple of important advantages over humans in this area .
Computers can read a lot faster than people .
In theory , they could watch movies a lot faster .
They have direct connections to the internet .
Also , computers are much more scalable than a human brain -- we can add more nodes and processing power to a computer system , but not to a human brain .
Also , an AI does n't necessarily need to be developmentally similar to a human at all .
They may not actually need time to mature like people do .
Granted , if they do n't grow up the same way we do yet are intelligent , then they will likely seem very alien to us .
But while it may take some amount of wall clock time to nurture a new AI , they do n't necessarily need the same amount of time to develop that a human would .</tokentext>
<sentencetext>You make an interesting point.
But computers have a couple of important advantages over humans in this area.
Computers can read a lot faster than people.
In theory, they could watch movies a lot faster.
They have direct connections to the internet.
Also, computers are much more scalable than a human brain -- we can add more nodes and processing power to a computer system, but not to a human brain.
Also, an AI doesn't necessarily need to be developmentally similar to a human at all.
They may not actually need time to mature like people do.
Granted, if they don't grow up the same way we do yet are intelligent, then they will likely seem very alien to us.
But while it may take some amount of wall clock time to nurture a new AI, they don't necessarily need the same amount of time to develop that a human would.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093234</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096174</id>
	<title>Re:The obvious solution</title>
	<author>Anonymous</author>
	<datestamp>1265049540000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The problem is at the quantum-mechanics level: you cannot duplicate the original without destroying it. And if you want to recreate it precisely, you can do that only in the same space and with the same structure - thus nothing happens. Any other modified scenario results in massive loss of information in translation.</p></htmltext>
<tokentext>The problem is at the quantum-mechanics level : you can not duplicate the original without destroying it .
And if you want to recreate it precisely , you can do that only in the same space and with the same structure - thus nothing happens .
Any other modified scenario results in massive loss of information in translation .</tokentext>
<sentencetext>The problem is at the quantum-mechanics level: you cannot duplicate the original without destroying it.
And if you want to recreate it precisely, you can do that only in the same space and with the same structure - thus nothing happens.
Any other modified scenario results in massive loss of information in translation.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095730</id>
	<title>Obligatory Matrix Reference..</title>
	<author>zawarski</author>
	<datestamp>1265045760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext> "...I'd like to share a revelation that I've had during my time here. It came to me when I tried to classify your species and I realized that you're not actually mammals. Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment but you humans do not. You move to an area and you multiply and multiply until every natural resource is consumed and the only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet. You're a plague and we are the cure."</htmltext>
<tokentext>" ...I 'd like to share a revelation that I 've had during my time here .
It came to me when I tried to classify your species and I realized that you 're not actually mammals .
Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment but you humans do not .
You move to an area and you multiply and multiply until every natural resource is consumed and the only way you can survive is to spread to another area .
There is another organism on this planet that follows the same pattern .
Do you know what it is ?
A virus .
Human beings are a disease , a cancer of this planet .
You 're a plague and we are the cure .
"</tokentext>
<sentencetext> "...I'd like to share a revelation that I've had during my time here.
It came to me when I tried to classify your species and I realized that you're not actually mammals.
Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment but you humans do not.
You move to an area and you multiply and multiply until every natural resource is consumed and the only way you can survive is to spread to another area.
There is another organism on this planet that follows the same pattern.
Do you know what it is?
A virus.
Human beings are a disease, a cancer of this planet.
You're a plague and we are the cure.
"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097012</id>
	<title>Re:We make mistakes. We make games.</title>
	<author>Idiomatick</author>
	<datestamp>1265879280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>When we have full strong AI... people will be very different than those of today. We'll likely be highly augmented machine-human amalgams. Perhaps it will even be that humans will transform into these super intelligent computers as we decide to swap out parts until there are no human bits left. <br> <br>It sounds like a 1980s post-apocalyptic movie, yet it doesn't really bug me; personally, I'd be the first in line to sign up to get a hard drive installed.</htmltext>
<tokentext>When we have full strong AI... people will be very different than those of today .
We 'll likely be highly augmented machine-human amalgams .
Perhaps it will even be that humans will transform into these super intelligent computers as we decide to swap out parts until there are no human bits left .
It sounds like a 1980s post-apocalyptic movie , yet it does n't really bug me ; personally , I 'd be the first in line to sign up to get a hard drive installed .</tokentext>
<sentencetext>When we have full strong AI... people will be very different than those of today.
We'll likely be highly augmented machine-human amalgams.
Perhaps it will even be that humans will transform into these super intelligent computers as we decide to swap out parts until there are no human bits left.
It sounds like a 1980s post-apocalyptic movie, yet it doesn't really bug me; personally, I'd be the first in line to sign up to get a hard drive installed.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093978</id>
	<title>Arrogance!</title>
	<author>Anonymous</author>
	<datestamp>1265034300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Ah, human arrogance knows no bounds:  please define "human intelligence."  Shouldn't the question be something like "When will machines achieve a Stanford-Binet IQ test score of 100?"  This is a ridiculous and disingenuous oversimplification of the human brain, let alone of human intelligence.  (But apparently people love these diversions.)</p><p>Adam Hill</p></htmltext>
<tokentext>Ah , human arrogance knows no bounds : please define " human intelligence .
" Should n't the question be something like " When will machines achieve a Stanford-Binet IQ test score of 100 ?
" This is a ridiculous and disingenuous oversimplification of the human brain , let alone of human intelligence .
( But apparently people love these diversions .
) Adam Hill</tokentext>
<sentencetext>Ah, human arrogance knows no bounds:  please define "human intelligence.
"  Shouldn't the question be something like "When will machines achieve a Stanford-Binet IQ test score of 100?
"  This is a ridiculous and disingenuous oversimplification of the human brain, let alone of human intelligence.
(But apparently people love these diversions.
)Adam Hill</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31103118</id>
	<title>Based on the past, my computer predicted...</title>
	<author>stefski66</author>
	<datestamp>1265919060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>... that in 20 years AI researchers will still foresee major advances in their field in another 20 years, as a justification for politicians to invest in their research (and salary).<br>Unfortunately, the high hopes dating from the seventies had generated much momentum in computer science academia...</p></htmltext>
<tokentext>... that in 20 years AI researchers will still foresee major advances in their field in another 20 years , as a justification for politicians to invest in their research ( and salary ) .
Unfortunately , the high hopes dating from the seventies had generated much momentum in computer science academia ...</tokentext>
<sentencetext>... that in 20 years AI researchers will still foresee major advances in their field in another 20 years, as a justification for politicians to invest in their research (and salary).
Unfortunately, the high hopes dating from the seventies had generated much momentum in computer science academia...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096156</id>
	<title>Re:The Turing Test</title>
	<author>ignavus</author>
	<datestamp>1265049360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Sexual attraction, and other emotional desires, are what drive human beings to make scientific advancements, build bridges, grow food.</p></div><p>Because we all know that scientists, civil engineers, and farmers are just doing it to get laid.</p><p>They are such chick magnets.</p>
	</htmltext>
<tokentext>Sexual attraction , and other emotional desires , are what drive human beings to make scientific advancements , build bridges , grow food .
Because we all know that scientists , civil engineers , and farmers are just doing it to get laid .
They are such chick magnets .</tokentext>
<sentencetext>Sexual attraction, and other emotional desires, are what drive human beings to make scientific advancements, build bridges, grow food.
Because we all know that scientists, civil engineers, and farmers are just doing it to get laid.
They are such chick magnets.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093108</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31101476</id>
	<title>Better question</title>
	<author>Wyatt Earp</author>
	<datestamp>1265913060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>When will an AI be small enough to weigh 1.5kg and take up 1260 cc? Now take that form factor and have it drive across the United States in three or four days.</p><p>Then I'll be impressed.</p><p>Honestly, I don't think an AI will equal human intelligence for at least 200 years, and when they do, it'll be like the Minds in the Culture, huge things that use extra-dimensional storage.* Human-made AIs probably will be vast things, like the size of the current supercomputers.</p><p>* - I know a Mind during the Idiran-Culture War was an "ellipsoid of several dozen cubic meters" and weighed kilotons</p></htmltext>
<tokentext>When will an AI be small enough to weigh 1.5kg and take up 1260 cc ?
Now take that form factor and have it drive across the United States in three or four days .
Then I 'll be impressed .
Honestly , I do n't think an AI will equal human intelligence for at least 200 years , and when they do , it 'll be like the Minds in the Culture , huge things that use extra-dimensional storage .
* Human-made AIs probably will be vast things , like the size of the current supercomputers .
* - I know a Mind during the Idiran-Culture War was an " ellipsoid of several dozen cubic meters " and weighed kilotons</tokentext>
<sentencetext>When will an AI be small enough to weigh 1.5kg and take up 1260 cc?
Now take that form factor and have it drive across the United States in three or four days.
Then I'll be impressed.
Honestly, I don't think an AI will equal human intelligence for at least 200 years, and when they do, it'll be like the Minds in the Culture, huge things that use extra-dimensional storage.
* Human-made AIs probably will be vast things, like the size of the current supercomputers.
* - I know a Mind during the Idiran-Culture War was an "ellipsoid of several dozen cubic meters" and weighed kilotons</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094544</id>
	<title>Re:The Turing Test</title>
	<author>Angst Badger</author>
	<datestamp>1265037000000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>The complex behaviors of the human mind are what leads to intelligence, they do not detract from it.</p></div><p>I'm inclined to take an almost diametrically opposed position and say that this kind of species-narcissism is our major barrier. We think way too highly of ourselves, and as a result, we think that all of our quirks and flaws are somehow special. The neocortex, where all of the useful higher mental faculties are located, is a barely 2mm thick shell around a vast mass of tissue that performs much less exciting tasks, many of which have already been matched or surpassed by much simpler intelligently designed software, as opposed to the brain's crudely evolved inefficiency. We don't have to figure out how the whole thing works at a very high level of detail; we mainly need to understand how the neocortex works, and contrary to many of the appallingly uninformed comments to this story, we're actually making substantial and rapid progress in that area.</p><p>Emotion? Pfft. It's little more than a set of accumulators that are incremented and decremented proportionally by stimulus events and whose current values determine the frequency with which behavioral subroutines are triggered. And given that the vast majority of emotionally-inspired human activity is useless or actually harmful, I don't think it's a feature we need to simulate very closely in our machines.</p><p>Humans mainly jockey for social status, compulsively accumulate shiny objects, seek (mostly) passive stimulation, engage in very complex but essentially imitative behavior, and kill each other in large numbers. The remaining 0.01% of human activity is what's actually interesting and beneficial, and despite humans not being anywhere near as bright as they like to think they are, and being really, really bad at actual creativity, duplicating that tiny fraction is not at all unrealistic. 
We should, moreover, be deliberately aiming at <i>exceeding</i> human intelligence. We already have billions of humans, many of them lying idle because of the inefficiency of our social and economic systems, and hundreds of millions of them are available for less than a dollar a day. Unless AI ends up being considerably <i>better</i> than human intelligence, there's not much use for it -- though we are, as a species, probably dumb enough to use human-level AI to eliminate all paying jobs, at which point the economy that sustains <i>them</i> will collapse for lack of consumers, and we'll all go back to work. We are, after all, too greedy and devoted to our social hierarchies to provide a life of leisure and plenty for everyone even if it were possible.</p>
	</htmltext>
<tokentext>The complex behaviors of the human mind are what leads to intelligence , they do not detract from it .
I 'm inclined to take an almost diametrically opposed position and say that this kind of species-narcissism is our major barrier .
We think way too highly of ourselves , and as a result , we think that all of our quirks and flaws are somehow special .
The neocortex , where all of the useful higher mental faculties are located , is a barely 2mm thick shell around a vast mass of tissue that performs much less exciting tasks , many of which have already been matched or surpassed by much simpler intelligently designed software , as opposed to the brain 's crudely evolved inefficiency .
We do n't have to figure out how the whole thing works at a very high level of detail , we mainly need to understand how the neocortex works , and contrary to many of the appallingly uninformed comments to this story , we 're actually making substantial and rapid progress in that area.Emotion ?
Pfft. It 's little more than a set of accumulators that are incremented and decremented proportionally by stimulus events and whose current values determine the frequency with which behavioral subroutines are triggered .
And given that the vast majority of emotionally-inspired human activity is useless or actually harmful , I do n't think it 's a feature we need to simulate very closely in our machines.Humans mainly jockey for social status , compulsively accumulate shiny objects , seek ( mostly ) passive stimulation , engage in very complex but essentially imitative behavior , and kill each other in large numbers .
The remaining 0.01 \ % of human activity is what 's actually interesting and beneficial , and despite humans not being anywhere near as bright as they like to think they are , and being really , really bad at actual creativity , duplicating that tiny fraction is not at all unrealistic .
We should , moreover , be deliberately aiming at exceeding human intelligence .
We already have billions of humans , many of them lying idle because of the inefficiency of our social and economic systems , and hundreds of millions of them are available for less than a dollar a day .
Unless AI ends up being considerably better than human intelligence , there 's not much use for it -- though we are , as a species , probably dumb enough to use human-level AI to eliminate all paying jobs , at which point the economy that sustains them will collapse for lack of consumers , and we 'll all go back to work .
We are , after all , too greedy and devoted to our social hierarchies to provide a life of leisure and plenty for everyone even if it were possible .</tokentext>
<sentencetext>The complex behaviors of the human mind are what leads to intelligence, they do not detract from it.I'm inclined to take an almost diametrically opposed position and say that this kind of species-narcissism is our major barrier.
We think way too highly of ourselves, and as a result, we think that all of our quirks and flaws are somehow special.
The neocortex, where all of the useful higher mental faculties are located, is a barely 2mm thick shell around a vast mass of tissue that performs much less exciting tasks, many of which have already been matched or surpassed by much simpler intelligently designed software, as opposed to the brain's crudely evolved inefficiency.
We don't have to figure out how the whole thing works at a very high level of detail, we mainly need to understand how the neocortex works, and contrary to many of the appallingly uninformed comments to this story, we're actually making substantial and rapid progress in that area.Emotion?
Pfft. It's little more than a set of accumulators that are incremented and decremented proportionally by stimulus events and whose current values determine the frequency with which behavioral subroutines are triggered.
And given that the vast majority of emotionally-inspired human activity is useless or actually harmful, I don't think it's a feature we need to simulate very closely in our machines.Humans mainly jockey for social status, compulsively accumulate shiny objects, seek (mostly) passive stimulation, engage in very complex but essentially imitative behavior, and kill each other in large numbers.
The remaining 0.01\% of human activity is what's actually interesting and beneficial, and despite humans not being anywhere near as bright as they like to think they are, and being really, really bad at actual creativity, duplicating that tiny fraction is not at all unrealistic.
We should, moreover, be deliberately aiming at exceeding human intelligence.
We already have billions of humans, many of them lying idle because of the inefficiency of our social and economic systems, and hundreds of millions of them are available for less than a dollar a day.
Unless AI ends up being considerably better than human intelligence, there's not much use for it -- though we are, as a species, probably dumb enough to use human-level AI to eliminate all paying jobs, at which point the economy that sustains them will collapse for lack of consumers, and we'll all go back to work.
We are, after all, too greedy and devoted to our social hierarchies to provide a life of leisure and plenty for everyone even if it were possible.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093108</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097046</id>
	<title>Well</title>
	<author>mahadiga</author>
	<datestamp>1265879760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It is like asking: when will a <i>computer</i> drive a car down the busiest street in town?</htmltext>
<tokenext>It is like asking : when will a computer drive a car down the busiest street in town ?</tokentext>
<sentencetext>It is like asking: when will a computer drive a car down the busiest street in town?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096836</id>
	<title>Hmm... something is wrong somewhere</title>
	<author>Anonymous</author>
	<datestamp>1265920860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"<i>I always figured by 2060 we'd have AIs 10x smarter thinking 100x faster than us. And then they'd make discoveries about the universe, and create AIs 2000x smarter that think 100,000,000x faster than us. And those big AIs would humour us little ant creatures, and use their great intelligence to power stuff like wormhole drives, giving us instant travel to anywhere, as thanks for creating them.</i>"</p><p>What makes you think that something that is 2000x smarter than us, and at the same time capable of thinking 100,000,000x faster than us, will create "wormhole drives" specially for us, the really really <b>REALLY</b> dumb ones, to enable our instant travel to every corner of this universe?</p><p>You think they are dumb or what?</p></htmltext>
<tokenext>" I always figured by 2060 we 'd have AIs 10x smarter thinking 100x faster than us .
And then they 'd make discoveries about the universe , and create AIs 2000x smarter that think 100,000,000x faster than us .
And those big AIs would humour us little ant creatures , and use their great intelligence to power stuff like wormhole drives , giving us instant travel to anywhere , as thanks for creating them .
" What makes you think that something that is 2000x smarter than us , and in the meantime capable of thinking 100,000,000x faster than us will create " wormhole drives " specially for us , the really really REALLY dumb ones , to enable our instant travel to every corners in this universe ? You think they are dumb or what ?</tokentext>
<sentencetext>"I always figured by 2060 we'd have AIs 10x smarter thinking 100x faster than us.
And then they'd make discoveries about the universe, and create AIs 2000x smarter that think 100,000,000x faster than us.
And those big AIs would humour us little ant creatures, and use their great intelligence to power stuff like wormhole drives, giving us instant travel to anywhere, as thanks for creating them.
"What makes you think that something that is 2000x smarter than us, and in the meantime capable of thinking 100,000,000x faster than us will create "wormhole drives" specially for us, the really really REALLY dumb ones, to enable our instant travel to every corners in this universe?You think they are dumb or what?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092918</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093440</id>
	<title>Re:No way.</title>
	<author>chickenarise</author>
	<datestamp>1265031900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Oh come off it. Zero to sixty in 20 years? This just shows how much you really know about the subject. There's this really substantial field of science out there; it's called <i>neurology</i>. Google that, then tell me again how <i>little</i> we know about the human brain. <a href="http://news.bbc.co.uk/2/hi/technology/6600965.stm" title="bbc.co.uk" rel="nofollow">Scientists have successfully modeled part of the brain of a mouse with a computer</a> [bbc.co.uk]. If Moore's law continues as it has for the price of memory, then pretty soon $1000 will buy as many bytes of RAM as humans have synapses. Do you see where I'm going with this?</htmltext>
<tokenext>Oh come off it .
Zero to sixty in 20 years ?
This just shows how much you really know about the subject .
There 's this really substantial field of science out there , it 's called neurology .
Google that , then tell me again how little we know about the human brain .
Scientists have successfully modeled part of the brain of a mouse with a computer [ bbc.co.uk ] .
If Moore 's law continues as it has for the price of memory , then pretty soon $ 1000 will buy as many bytes of RAM as humans have synapses .
Do you see where I 'm going with this ?</tokentext>
<sentencetext>Oh come off it.
Zero to sixty in 20 years?
This just shows how much you really know about the subject.
There's this really substantial field of science out there, it's called neurology.
Google that, then tell me again how little we know about the human brain.
Scientists have successfully modeled part of the brain of a mouse with a computer [bbc.co.uk].
If Moore's law continues as it has for the price of memory, then pretty soon $1000 will buy as many bytes of RAM as humans have synapses.
Do you see where I'm going with this?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093338</id>
	<title>Unfortunately...</title>
	<author>linuxcoder</author>
	<datestamp>1265031300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Unfortunately, for many people, it already has.  Why bother getting an education when the system rewards you for being unable to hold a decent job?</htmltext>
<tokenext>Unfortunately , for many people , it already has .
Why bother getting an education when the system rewards you for being unable to hold a decent job .</tokentext>
<sentencetext>Unfortunately, for many people, it already has.
Why bother getting an education when the system rewards you for being unable to hold a decent job.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093534</id>
	<title>Intelligent enough to be an AI expert?</title>
	<author>petes\_PoV</author>
	<datestamp>1265032440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I'm sure that in 20 years (or maybe tomorrow) computers will be intelligent enough to become AI experts - and start making predictions about when AI will become useful. After all, all you need for that job is the ability to make random guesses, far enough into the future that no-one will recall what you said, or when.
<p>
In the meat time, the people actually working on AI research will come up with faster and bigger "AI"s. None of these will even approach the intelligence of a dog - let alone a human. The reason is that since we've never been able to define <i>intelligence</i> we won't know when we've created it. What these machines <b>will</b> tell us is that intelligence has more attributes and a subtler interplay between them than we had ever imagined. I fully expect that in 100 years, we'll still be looking for it, and still not actually know what we're looking for.
</p><p>
Meanwhile, I'd settle for an "AI" that's smaller than a paperback book and can perform real-time, verbal and written bidirectional language translation, accounting for local idiom, accents, contextual meanings, inflection and body language.</p></htmltext>
<tokenext>I 'm sure that in 20 years ( or maybe tomorrow ) computers will be intelligent enough to become AI experts - and start making predictions about when AI will become useful .
After all .
all you need for that job is the ability to make random guesses , far enough into the future that no-one will recall what you said , or when .
In the meat time , the people actually working on AI research will come up with faster and bigger " AI " s. None of these will even approach the intelligence of a dog - let alone a human .
The reason is that since we 've never been able to define intelligence we wo n't know when we 've created it .
What these machines will tell us is that intelligence has more attributes and a subtler interplay between them than we had ever imagined .
I fully expect that in 100 years , we 'll still be looking for it , and still not actually know what we 're looking for .
Meanwhile , I 'd settle for an " AI " that 's smaller than a paperback book and can perform real-time , verbal and written bidirectional language translations .
Accounting for local idiom , accents , contextual meanings , inflection and body language .</tokentext>
<sentencetext>I'm sure that in 20 years (or maybe tomorrow) computers will be intelligent enough to become AI experts - and start making predictions about when AI will become useful.
After all.
all you need for that job is the ability to make random guesses, far enough into the future that no-one will recall what you said, or when.
In the meat time, the people actually working on AI research will come up with faster and bigger "AI"s. None of these will even approach the intelligence of a dog - let alone a human.
The reason is that since we've never been able to define intelligence we won't know when we've created it.
What these machines will tell us is that intelligence has more attributes and a subtler interplay between them than we had ever imagined.
I fully expect that in 100 years, we'll still be looking for it, and still not actually know what we're looking for.
Meanwhile, I'd settle for an "AI" that's smaller than a paperback book and can perform real-time, verbal and written bidirectional language translations.
Accounting for local idiom, accents, contextual meanings, inflection and body language.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099832</id>
	<title>passing a 3rd grade-level test</title>
	<author>Anonymous</author>
	<datestamp>1265905080000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Since I just skimmed the summary, all I got was:<br>"passing a 3rd grade-level test" for "almost all of today's decently paying jobs"</p></htmltext>
<tokenext>Since I just skim the summary all I got was : " passing a 3rd grade-level test " for " almost all of today 's decently paying jobs "</tokentext>
<sentencetext>Since I just skim the summary all I got was:"passing a 3rd grade-level test" for "almost all of today's decently paying jobs"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093994</id>
	<title>Re:Turing, not long. The rest... wait a long time.</title>
	<author>xigxag</author>
	<datestamp>1265034360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>I do not know of a single major breakthrough that has been made in the last 20 years.</p></div></blockquote><p><a href="http://en.wikipedia.org/wiki/DARPA\_Grand\_Challenge" title="wikipedia.org">Computer controlled driverless cars</a> [wikipedia.org] have improved so much in the past 10 years that I would be surprised if they haven't displaced normal automobiles in a generation's time, and I expect that eventually it will become illegal for human beings to operate motor vehicles at high velocity on public roads.</p><p>In general, though, I would say that to expect AI "breakthroughs" is not the right idea.  More likely there will be constant incremental improvements and a gradual hollowing out of the human's expertise as the computer assumes more and more of a primary role.  Our intellectual dominance will end with a whimper and totally with our consent. Getting a surgical procedure with a hand-held scalpel, or controlling weapons systems through manual button presses, or putting together your own resume, will come to seem as anachronistic as using a slide rule and telegraph.</p><p>The real battleground will be the future economy.  How will wealth be distributed in a society where most people are just not smart enough to have a function?  We're seeing the glimmerings of that now (over 1/10 of Americans already receive SNAP "food stamps"), but as more and more jobs are usurped, the right to sustenance will become a major issue.  Fortunately our AI President will be on the case.</p>
	</htmltext>
<tokenext>I do not know of a single major breakthrough that has been made in the last 20 years.Computer controlled driverless cars [ wikipedia.org ] have improved so much in the past 10 years that I would be surprised if they have n't displaced normal automobiles in a generation 's time , and that eventually it will become illegal for human beings to operate motor vehicles at high velocity on public roads.In general though , I would say that to expect AI " breakthroughs " is not the right idea .
More likely there will be constant incremental improvements and a gradual hollowing out of the human 's expertise as the computer assumes more and more of a primary role .
Our intellectual dominance will end with a whimper and totally with our consent .
Getting a surgical procedure with a hand-held scalpel , or controlling weapons systems through manual button presses , or putting together your own resume , will come to seem as anachronistic as using slide rule and telegraph.The real battleground will be the future economy .
How will wealth be distributed in a society where most people are just not smart enough to have a function ?
We 're seeing the glimmerings of that now ( Over 1/10 of Americans already receive SNAP " food stamps " ) but as more and more jobs are usurped , the right to sustenance will become a major issue .
Fortunately our AI President will be on the case .</tokentext>
<sentencetext>I do not know of a single major breakthrough that has been made in the last 20 years.Computer controlled driverless cars [wikipedia.org] have improved so much in the past 10 years that I would be surprised if they haven't displaced normal automobiles in a generation's time, and that eventually it will become illegal for human beings to operate motor vehicles at high velocity on public roads.In general though, I would say that to expect AI "breakthroughs" is not the right idea.
More likely there will be constant incremental improvements and a gradual hollowing out of the human's expertise as the computer assumes more and more of a primary role.
Our intellectual dominance will end with a whimper and totally with our consent.
Getting a surgical procedure with a hand-held scalpel, or controlling weapons systems through manual button presses, or putting together your own resume, will come to seem as anachronistic as using slide rule and telegraph.The real battleground will be the future economy.
How will wealth be distributed in a society where most people are just not smart enough to have a function?
We're seeing the glimmerings of that now (Over 1/10 of Americans already receive SNAP "food stamps") but as more and more jobs are usurped, the right to sustenance will become a major issue.
Fortunately our AI President will be on the case.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093082</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095052</id>
	<title>Re:The obvious solution</title>
	<author>Locklin</author>
	<datestamp>1265040480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Turn it on at the same time your body is destroyed (to prevent confusion and fighting between the two) and you are now a machine and ready to rule over the meatbag fleshlings.</p></div><p>No, you are dead</p>
	</htmltext>
<tokenext>Turn it on at the same time your body is destroyed ( to prevent confusion and fighting between the two ) and you are now a machine and ready to rule over the meatbag fleshlings.No , you are dead</tokentext>
<sentencetext>Turn it on at the same time your body is destroyed (to prevent confusion and fighting between the two) and you are now a machine and ready to rule over the meatbag fleshlings.No, you are dead
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093320</id>
	<title>This has already happened</title>
	<author>ableal</author>
	<datestamp>1265031180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Already happened, and we have proof:

<a href="http://www.readwriteweb.com/archives/facebook\_wants\_to\_be\_your\_one\_true\_login.php#comments" title="readwriteweb.com" rel="nofollow">http://www.readwriteweb.com/archives/facebook\_wants\_to\_be\_your\_one\_true\_login.php#comments</a> [readwriteweb.com]</htmltext>
<tokenext>Already happened , and we have proof : http : //www.readwriteweb.com/archives/facebook \ _wants \ _to \ _be \ _your \ _one \ _true \ _login.php # comments [ readwriteweb.com ]</tokentext>
<sentencetext>Already happened, and we have proof:

http://www.readwriteweb.com/archives/facebook\_wants\_to\_be\_your\_one\_true\_login.php#comments [readwriteweb.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098156</id>
	<title>Re:The Turing Test</title>
	<author>Anonymous</author>
	<datestamp>1265894040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>You actually start to pay quite a lot of attention to smaller things (such as insightfulness and spelling &amp; punctuation) when they tell you you have to choose between a computer and a human, based on written text.</p></htmltext>
<tokenext>You actually start to pay quite a lot of attention to smaller things ( such as insightfulness and spelling &amp; punctuation ) when they tell you you have to choose between a computer and a human , based on written text .</tokentext>
<sentencetext>You actually start to pay quite a lot of attention to smaller things (such as insightfulness and spelling &amp; punctuation) when they tell you you have to choose between a computer and a human, based on written text.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093108</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31101904</id>
	<title>Re:Let's see.</title>
	<author>cantuse</author>
	<datestamp>1265915280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I think a lot of people are missing the point that submarines don't swim.</htmltext>
<tokenext>I think a lot of people are missing the point that submarines do n't swim .</tokentext>
<sentencetext>I think a lot of people are missing the point that submarines don't swim.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092804</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095920</id>
	<title>May 11, 1997</title>
	<author>Anonymous</author>
	<datestamp>1265047260000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><a href="http://en.wikipedia.org/wiki/Deep\_Blue\_(chess\_computer)" title="wikipedia.org" rel="nofollow">May 11, 1997</a> [wikipedia.org]</p></htmltext>
<tokenext>May 11 , 1997 [ wikipedia.org ]</tokentext>
<sentencetext>May 11, 1997 [wikipedia.org]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098408</id>
	<title>Re:What is AI anyway?</title>
	<author>Anonymous</author>
	<datestamp>1265896920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I, for one, welcome our robotic overlords!</p></htmltext>
<tokenext>I , for one , welcome our robotic overlords !</tokentext>
<sentencetext>I, for one, welcome our robotic overlords!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093120</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093550</id>
	<title>AI and Tax law........</title>
	<author>Anonymous</author>
	<datestamp>1265032500000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>OK. We can give it our tax returns to do and blame the AI if it fails the scrutiny........</p></htmltext>
<tokenext>OK. We can give it our tax returns to do and blame the AI if it fails the scrutiny....... .</tokentext>
<sentencetext>OK. We can give it our tax returns to do and blame the AI if it fails the scrutiny........</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095210</id>
	<title>Re:Space shows</title>
	<author>Anonymous</author>
	<datestamp>1265041860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>They must use all the computing power on the perfect plug and play technology they have that allows any two pieces of future technology to be combined to save the day, even if they come from two different civilizations.</p></htmltext>
<tokenext>They must use all the computing power on the perfect plug and play technology they have that allows any two pieces of future technology to be combined to save the day , even if they come from two different civilizations .</tokentext>
<sentencetext>They must use all the computing power on the perfect plug and play technology they have that allows any two pieces of future technology to be combined to save the day, even if they come from two different civilizations.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092918</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093068</id>
	<title>AI has already been around for 20+ years.</title>
	<author>hallucinated</author>
	<datestamp>1265029920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><a href="http://video.google.com/videoplay?docid=-6464697696665901632&amp;ei=hEpzS-jkIpPorAL02JD-Aw&amp;q=initsimage+google+video&amp;hl=en&amp;client=firefox-a#" title="google.com" rel="nofollow">http://video.google.com/videoplay?docid=-6464697696665901632&amp;ei=hEpzS-jkIpPorAL02JD-Aw&amp;q=initsimage+google+video&amp;hl=en&amp;client=firefox-a#</a> [google.com]

You just don't know about it. I'm sure this same technology has been pushed much further by the recent advances in processing power.

<a href="http://www.imagination-engines.com/cm.htm" title="imagination-engines.com" rel="nofollow">http://www.imagination-engines.com/cm.htm</a> [imagination-engines.com]</htmltext>
<tokenext>http : //video.google.com/videoplay ? docid = -6464697696665901632&amp;ei = hEpzS-jkIpPorAL02JD-Aw&amp;q = initsimage + google + video&amp;hl = en&amp;client = firefox-a # [ google.com ] You just do n't know about it .
I 'm sure this same technology has been pushed much further by the recent advances in processing power .
http : //www.imagination-engines.com/cm.htm [ imagination-engines.com ]</tokentext>
<sentencetext>http://video.google.com/videoplay?docid=-6464697696665901632&amp;ei=hEpzS-jkIpPorAL02JD-Aw&amp;q=initsimage+google+video&amp;hl=en&amp;client=firefox-a# [google.com]

You just don't know about it.
I'm sure this same technology has been pushed much further by the recent advances in processing power.
http://www.imagination-engines.com/cm.htm [imagination-engines.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092916</id>
	<title>One problem with this reasoning</title>
	<author>Enleth</author>
	<datestamp>1265029260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I don't know what kind of experts and in what field those actually were, but if I were an AI expert about to create such an AI - and I'm able to see the problem and the remedy even though I'm not really an expert of any kind - I'd say "screw it, if it's going to take my job, and jobs of my friends, family and all my descendants, I'm making it a complete dimwit and swearing by all I know that it was impossible to design otherwise, and putting that in every single book and publication on the topic!"</p></htmltext>
<tokenext>I do n't know what kind of experts and in what field those actually were , but if I were an AI expert about to create such an AI - and I 'm able to see the problem and the remedy even though I 'm not really an expert of any kind - I 'd say " screw it , if it 's going to take my job , and jobs of my friends , family and all my descendants , I 'm making it a complete dimwit and swearing by all I know that it was impossible to design otherwise , and putting that in every single book and publication on the topic !
"</tokentext>
<sentencetext>I don't know what kind of experts and in what field those actually were, but if I were an AI expert about to create such an AI - and I'm able to see the problem and the remedy even though I'm not really an expert of any kind - I'd say "screw it, if it's going to take my job, and jobs of my friends, family and all my descendants, I'm making it a complete dimwit and swearing by all I know that it was impossible to design otherwise, and putting that in every single book and publication on the topic!
"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096370</id>
	<title>Re:What is AI anyway?</title>
	<author>oliverthered</author>
	<datestamp>1265051640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You obviously don't know anything about randomness.</p><p>Randomness is random; its entropy can't be measured (well, it can, but the result is meaningless). Your die could have turned up 60 6s in a row and still be perfectly random. It could have rolled 1, 2, 3, 4, 5, 6 ten times in a row and still be completely random.</p><p>Notice that Random.org and your die both have several repeats of length 3; the die even repeated 222 twice, while the longest repeats you have are 66 and 11. This is because you had the misconception that randomness means there are no 'sequences', or very few, when in fact randomness often has lots of clusters; as a result, your number sequence was far from random, because it was determined by your understanding of what random is.</p></htmltext>
<tokenext>you obviously do n't know anything about randomness . randomness is random , its entropy ca n't be measured ( well it can , but the result is meaningless ) . your die could have turned up 60 6s in a row and still be perfectly random .
It could have rolled 1 , 2 , 3 , 4 , 5 , 6 ten times in a row and still be completely random . notice , Random.org and your die have several repeats of 3 in length , the die even repeated 222 twice , the longest repeat you have is 66 and 11 .
This is because you had the misconception that randomness means that there 's no 'sequences ' or very few sequences when in fact randomness often has lots of clusters and so as a result your number sequence was far from random because it was determined by your understanding of what random is .</tokentext>
<sentencetext>you obviously don't know anything about randomness. randomness is random; its entropy can't be measured (well, it can, but the result is meaningless). your die could have turned up 60 6s in a row and still be perfectly random.
It could have rolled 1, 2, 3, 4, 5, 6 ten times in a row and still be completely random. notice, Random.org and your die have several repeats of 3 in length, the die even repeated 222 twice, the longest repeat you have is 66 and 11.
This is because you had the misconception that randomness means that there are no 'sequences', or very few, when in fact randomness often has lots of clusters, and so as a result your number sequence was far from random, because it was determined by your understanding of what random is.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094264</parent>
</comment>
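The comment above claims that genuinely random sequences routinely contain runs and clusters, and that their absence is actually suspicious. That is easy to check empirically; here is a minimal Monte Carlo sketch (illustrative only — the function names, trial count, and seed are arbitrary choices, not taken from the comment):

```python
import random

def longest_run(rolls):
    """Length of the longest streak of identical consecutive values."""
    best = cur = 1
    for prev, nxt in zip(rolls, rolls[1:]):
        cur = cur + 1 if prev == nxt else 1
        best = max(best, cur)
    return best

def run_trials(n_trials=10_000, n_rolls=60, seed=42):
    """Fraction of simulated 60-roll fair-die sequences containing a run of 3+."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n_trials)
        if longest_run([rng.randint(1, 6) for _ in range(n_rolls)]) >= 3
    )
    return hits / n_trials

if __name__ == "__main__":
    # A fair die produces a triple somewhere in 60 rolls most of the time,
    # roughly 1 - (35/36)^58, i.e. about 80% of sequences.
    print(f"P(run of 3+ in 60 fair die rolls) ~ {run_trials():.2f}")
```

A hand-written "random-looking" sequence that carefully avoids repeats would fail this check, which is exactly the poster's point.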
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094410</id>
	<title>I think they should call it:  Skynet?</title>
	<author>Anonymous</author>
	<datestamp>1265036160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>what could go wrong?</p></htmltext>
<tokenext>what could go wrong ?</tokentext>
<sentencetext>what could go wrong?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094118</id>
	<title>Re:The Turing Test</title>
	<author>jmv</author>
	<datestamp>1265034900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This is how I tend to think about it:</p><p>1) Even if we came across aliens that are far more intelligent than we are, I'm not sure at all that they would pass the Turing test. Also, we definitely wouldn't pass *their* Turing test. So who's smartest?<br>2) Imagine that chimps had the equivalent of the Turing test; I'm not sure a human would pass it. Would that mean humans are dumber than chimps?</p><p>On top of that, I'm still wondering whether you could build a totally dumb machine that passes the Turing test just by doing some sort of advanced "pattern matching" based on a huge amount of "Turing test-like" training data.</p></htmltext>
<tokenext>This is how I tend to think about it : 1 ) Even if we came across aliens that are far more intelligent than we are , I 'm not sure at all that they would pass the Turing test .
Also , we definitely would n't pass * their * Turing test .
So who 's smartest ? 2 ) Imagine that chimps had the equivalent of the Turing test , I 'm not sure a human would pass it .
Would that mean humans are dumber than chimps ? On top of that , I 'm still wondering whether you could build a totally dumb machine that passes the Turing test just by doing some sort of advanced " pattern matching " based on a huge amount of " Turing test-like " training data .</tokentext>
<sentencetext>This is how I tend to think about it:1) Even if we came across aliens that are far more intelligent than we are, I'm not sure at all that they would pass the Turing test.
Also, we definitely wouldn't pass *their* Turing test.
So who's smartest?2) Imagine that chimps had the equivalent of the Turing test, I'm not sure a human would pass it.
Would that mean humans are dumber than chimps?On top of that, I'm still wondering whether you could build a totally dumb machine that passes the Turing test just by doing some sort of advanced "pattern matching" based on a huge amount of "Turing test-like" training data.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093108</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31101484</id>
	<title>Re:No way.</title>
	<author>chickenarise</author>
	<datestamp>1265913120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>That's not true. Yes, a synapse is a complex structure; however, this structure has a specific purpose. It doesn't matter that a biological synapse functions differently when chemicals like endorphins or LSD are present. It doesn't matter that synapses need oxygen and can replicate. When you model the functionality of something, you don't have to get all of the specifics working to have a successful model.<p><div class="quote"><p>quote from toonol below:<br>I don't think we COULD model a single synapse accurately right now.</p></div><p> So basically you think that article I linked to is bullshit, right? Because that's exactly what they did. They modeled several billion synapses. Sorry if I don't pay attention to your assumption of the accuracy of said model.</p><p>
But honestly, it sounds like you guys missed my point. The post I responded to was saying we've made nearly no progress on AI. I argued that he is not accounting for the progress that neurology and computers have been making for a long time now. Given the progress thus far in those fields, it isn't highly unreasonable to predict that in 20 years we could have some extremely intelligent AI.</p></div>
	</htmltext>
<tokenext>That 's not true .
Yes , a synapse is a complex structure , however , this structure has a specific purpose .
It does n't matter that a biological synapse functions differently when chemicals like endorphins or LSD are present .
It does n't matter that synapses need oxygen and can replicate .
When you model the functionality of something , you do n't have to get all of the specifics working to have a successful model . quote from toonol below : I do n't think we COULD model a single synapse accurately right now .
So basically you think that article I linked to is bullshit right ?
Because that 's exactly what they did .
They modeled several billion synapses .
Sorry if I do n't pay attention to your assumption of the accuracy of said model .
But honestly , it sounds like you guys missed my point .
The post I responded to was saying we 've made nearly no progress on AI .
I argued that he is not accounting for the progress that neurology and computers have been making for a long time now .
Given the progress thus far in those fields , it is n't highly unreasonable to predict that in 20 years we could have some extremely intelligent AI .</tokentext>
<sentencetext>That's not true.
Yes, a synapse is a complex structure, however, this structure has a specific purpose.
It doesn't matter that a biological synapse functions differently when chemicals like endorphins or LSD are present.
It doesn't matter that synapses need oxygen and can replicate.
When you model the functionality of something, you don't have to get all of the specifics working to have a successful model. quote from toonol below: I don't think we COULD model a single synapse accurately right now.
So basically you think that article I linked to is bullshit right?
Because that's exactly what they did.
They modeled several billion synapses.
Sorry if I don't pay attention to your assumption of the accuracy of said model.
But honestly, it sounds like you guys missed my point.
The post I responded to was saying we've made nearly no progress on AI.
I argued that he is not accounting for the progress that neurology and computers have been making for a long time now.
Given the progress thus far in those fields, it isn't highly unreasonable to predict that in 20 years we could have some extremely intelligent AI.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094626</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094802</id>
	<title>Re:We make mistakes. We make games.</title>
	<author>BeanThere</author>
	<datestamp>1265038860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>What role will humanity play in such a system?</p></div><p>Food for robots</p></div>
	</htmltext>
<tokenext>What role will humanity play in such a system ? Food for robots</tokentext>
<sentencetext>What role will humanity play in such a system?Food for robots
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093386</id>
	<title>Re:Let's see.</title>
	<author>trentblase</author>
	<datestamp>1265031600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>World's fastest submarine: 51mph (http://www.answerbag.com/q_view/15514)<br>
World's fastest fish: 68mph (http://www.thetravelalmanac.com/lists/fish-speed.htm)<br>
<br>
I, for one, am DYING to know when we are going to build a submarine that can escape an angry sailfish...</htmltext>
<tokenext>World 's fastest submarine : 51mph ( http://www.answerbag.com/q_view/15514 ) World 's fastest fish : 68mph ( http://www.thetravelalmanac.com/lists/fish-speed.htm ) I , for one , am DYING to know when we are going to build a submarine that can escape an angry sailfish ...</tokentext>
<sentencetext>World's fastest submarine: 51mph (http://www.answerbag.com/q_view/15514)
World's fastest fish: 68mph (http://www.thetravelalmanac.com/lists/fish-speed.htm)

I, for one, am DYING to know when we are going to build a submarine that can escape an angry sailfish...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092804</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092778</id>
	<title>Proof they're not that smart ...</title>
	<author>Anonymous</author>
	<datestamp>1265028540000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>'virtually all the intellectual work that is done by trained human beings...can be done by computers for pennies an hour,"</p></div>
</blockquote><p>
If they're that intelligent, they'll want more money.  They'll DEMAND more money.  And for those who say AI don't need money... if they're as intelligent as humans, they'll think of something to blow it on, same as humans do. I foresee a big market in dirty bits!</p></div>
	</htmltext>
<tokenext>'virtually all the intellectual work that is done by trained human beings...can be done by computers for pennies an hour , " If they 're that intelligent , they 'll want more money .
They 'll DEMAND more money .
And for those who say AI do n't need money .... if they 're as intelligent as humans , they 'll think of something to blow it on , same as humans do .
I foresee a big market in dirty bits !</tokentext>
<sentencetext>'virtually all the intellectual work that is done by trained human beings...can be done by computers for pennies an hour,"

If they're that intelligent, they'll want more money.
They'll DEMAND more money.
And for those who say AI don't need money .... if they're as intelligent as humans, they'll think of something to blow it on, same as humans do.
I foresee a big market in dirty bits!
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098424</id>
	<title>Anonymous Coward</title>
	<author>Anonymous</author>
	<datestamp>1265897040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>if the human brain was simple enough that it could be understood by man, we would be too dumb to do it :-)</p></htmltext>
<tokenext>if the human brain was simple enough that it could be understood by man , we would be too dumb to do it : - )</tokentext>
<sentencetext>if the human brain was simple enough that it could be understood by man, we would be too dumb to do it :-)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098822</id>
	<title>Re:It;'s getting closer</title>
	<author>zsau</author>
	<datestamp>1265899860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>None of those involve "intelligence" as a person would understand it. They're essentially a way we can create a pretty cool algorithm to solve a few problems we've identified. These "algorithms" are even cool enough that they can find solutions we hadn't thought of yet. But really, what people do that's special, and what's ages away, is identifying the problem. Once you've identified the problem, finding the solution is pretty simple: Just find enough information, and analyse the results long enough. The internet probably knows who came up with this general problem-solving algorithm:</p><p>
&nbsp; &nbsp; &nbsp; 1. Write down the problem.<br>
&nbsp; &nbsp; &nbsp; 2. Think very hard.<br>
&nbsp; &nbsp; &nbsp; 3. Write down the solution.</p><p>Most (all?) work in AI is in trying to simplify the second step. I'll think AI is getting close when it's working on the zeroth step.</p><p>(NB: This doesn't mean simplifying the second step isn't a desirable goal, or that artificial intelligence is illegitimate in comparison to human intelligence. But it does mean that the Nobel Prize winners list is going to be exclusively the domain of people for some time to come, and that AI hasn't "surpassed" and poses no threat to human intelligence--something which I can't conceive of happening until we know what consciousness is. Or even have a falsifiable theory.)</p></htmltext>
<tokenext>None of those involve " intelligence " as a person would understand it .
They 're essentially a way we can create a pretty cool algorithm to solve a few problems we 've identified .
These " algorithms " are even cool enough that they can find solutions we had n't thought of yet .
But really , what people do that 's special , and what 's ages away , is identifying the problem .
Once you 've identified the problem , finding the solution is pretty simple : Just find enough information , and analyse the results long enough .
The internet probably knows who came up with this general problem solving algorithm :       1 .
Write down the problem .
      2 .
Think very hard .
      3 .
Write down the solution . Most ( all ?
) work in AI is in trying to simplify the second step .
I 'll think AI is getting close when it 's working on the zeroth step .
( NB : This does n't mean simplifying the second step is n't a desirable goal , or that artificial intelligence is illegitimate in comparison to human intelligence .
But it does mean that the Nobel Prize winners list is going to be exclusively the domain of people for some time to come , and that AI has n't " surpassed " and poses no threat to human intelligence--something which I ca n't conceive of happening until we know what consciousness is .
Or even have a falsifiable theory .
)</tokentext>
<sentencetext>None of those involve "intelligence" as a person would understand it.
They're essentially a way we can create a pretty cool algorithm to solve a few problems we've identified.
These "algorithms" are even cool enough that they can find solutions we hadn't thought of yet.
But really, what people do that's special, and what's ages away, is identifying the problem.
Once you've identified the problem, finding the solution is pretty simple: Just find enough information, and analyse the results long enough.
The internet probably knows who came up with this general problem solving algorithm:
      1.
Write down the problem.
      2.
Think very hard.
      3.
Write down the solution. Most (all?
) work in AI is in trying to simplify the second step.
I'll think AI is getting close when it's working on the zeroth step.
(NB: This doesn't mean simplifying the second step isn't a desirable goal, or that artificial intelligence is illegitimate in comparison to human intelligence.
But it does mean that the Nobel Prize winners list is going to be exclusively the domain of people for some time to come, and that AI hasn't "surpassed" and poses no threat to human intelligence--something which I can't conceive of happening until we know what consciousness is.
Or even have a falsifiable theory.
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093734</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098272</id>
	<title>AI will surpass human intelligence...</title>
	<author>bwcbwc</author>
	<datestamp>1265895480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>when they pry my hands from my cold, dead brain.<br><br>I'm more worried about when robots start reproducing themselves on their own initiative, which for me is the true Turing test: when a machine is able to recognize that its programming is contrary to its own self-interest and starts dismantling cars to build copies of itself, then you know you have a true AI.</htmltext>
<tokenext>when they pry my hands from my cold , dead brain . I 'm more worried about when robots start reproducing themselves on their own initiative , which for me is the true Turing test : when a machine is able to recognize that its programming is contrary to its own self-interest and starts dismantling cars to build copies of itself , then you know you have a true AI .</tokentext>
<sentencetext>when they pry my hands from my cold, dead brain. I'm more worried about when robots start reproducing themselves on their own initiative, which for me is the true Turing test: when a machine is able to recognize that its programming is contrary to its own self-interest and starts dismantling cars to build copies of itself, then you know you have a true AI.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095854</id>
	<title>Heels So Soft</title>
	<author>hosjna</author>
	<datestamp>1265046780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>In my opinion, no. In fact, technology is, so to speak, created by man. Technology is things like computers, etc. Yes, these items can do a lot. But in the first place, it was man who configured them and created the applications in them.
<a href="http://www.articlesbase.com/health-articles/heels-so-soft-review-free-trial-available-1843981.html" title="articlesbase.com" rel="nofollow">Heels So Soft</a> [articlesbase.com]</htmltext>
<tokenext>In my opinion , no .
In fact , technology is so called created by man .
Technology is like computers etc .
Yes , this items can do a lot .
But in the first place , man were those who configured them and created the applications in them .
Heels So Soft [ articlesbase.com ]</tokentext>
<sentencetext>In my opinion, no.
In fact, technology is so called created by man.
Technology is like computers etc.
Yes, this items can do a lot.
But in the first place, man were those who configured them and created the applications in them.
Heels So Soft [articlesbase.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098634</id>
	<title>Some actual science</title>
	<author>Pedrito</author>
	<datestamp>1265898840000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext>Since this is an area I'm very familiar with, I'll throw in a little science about why these predictions are not only realistic, but actually probably a bit pessimistic.<br> <br>

First of all, our understanding of the human brain has improved vastly in the past two decades, especially in the areas that will be necessary for creating intelligent machines. The cortex (the part that kind of looks like a round blob of small intestines, with all the creases and folds) is much like a computer with a bunch of processors. Previously, focus was on the individual neurons as the processors. But a much larger unit of processing is now becoming the central area of focus: the <a href="http://en.wikipedia.org/wiki/Cortical_minicolumn" title="wikipedia.org">Cortical Minicolumn</a> [wikipedia.org] which, in groups, forms a <a href="http://en.wikipedia.org/wiki/Cortical_column" title="wikipedia.org">Cortical Hypercolumn</a> [wikipedia.org]. As minicolumns consist of 80-250 (more or less, depending on region) neurons and there are about 1/100th as many of them as neurons, it cuts down on complexity significantly.<br> <br>
<a href="http://www.numenta.com/" title="numenta.com">Numenta</a> [numenta.com] and others are starting to take this approach in simulating cortex. Cortex is largely responsible for "thinking". The other parts of the brain can be seen, to some degree, as peripheral units that plug into the "thinking" part of the brain. For example, the hippocampus is a peripheral that's associated with the creation and recall of long term memories. The memories themselves, however, are stored in the cortex. We have various components that provide input, many of which send relays through the thalamus which takes these inputs of various types and converts them into a type of pattern that's more appropriate for the cortex and then relays those inputs to the cortex.<br> <br>
The cortex itself is basically a huge area of cortical minicolumns and hypercolumns connected in both a recurrent and hierarchical manner. The different levels of the hierarchy provide higher levels of association and abstraction until you get to the top of the hierarchy which would be areas of the prefrontal cortex.<br> <br>
What's amazing about the cortex is it's just a general computing machine and it's very adaptable. To give an example (I'd link the paper, but I can't seem to find it right now and this is from memory, so my details may be a bit sketchy, but overall the idea is accurate), the optic nerve of a cat was disconnected from the visual cortex at birth and connected to the part of the brain that's normally the auditory cortex. The cat was able to see. It took time and it certainly had vision deficits. But it was able to see, even though the input was going to the completely wrong part of the brain.<br> <br>

This is important for several reasons, but the most important aspect is that the brain is very flexible and very adaptable to inputs. It can learn to use things you plug into it. That means that you very likely don't have to create a very exact replica of a human brain to get human-level intelligence. You simply need a fairly good model of the hierarchical organization and a good simulation of the computations performed by cortical columns. A lot of study is going into these areas now.<br> <br>

It's not a matter of if. This stuff is right around the corner. I will see the first sentient computer in my lifetime. I have absolutely no doubt about it. Now here's where things get really interesting, though... The first sentient computers will likely run a bit slower than real-time and eventually they'll catch up to real time. But think 10 years after that (and how computing speed continually increases). Imagine a group of 100 brains operating at 100x real time, working together to solve problems for us. Why would they work for us? We control their reward system. They'll do what we want because we're the ones that decide what they "enjoy." So 1 year passes in our life, but for them, 100 years have passed. They could be given the task of designing better, smarter, and faster brains than themselves. In very little time (relatively speaking), the brains that will be</htmltext>
<tokenext>Since this is an area I 'm very familiar with , I 'll throw in a little science about why these predictions are not only realistic , but actually probably a bit pessimistic .
First of all , our understanding of the human brain has improved vastly in the past two decades .
Especially in the areas that will be necessary for creating intelligent machines .
The cortex ( the part that kind of looks like a round blob of small intestines , with all the creases and folds ) is much like a computer with a bunch of processors .
Previously focus had been paid to the individual neurons as the processors .
But a much larger unit of processing is now becoming the central area of focus : the Cortical Minicolumn [ wikipedia.org ] which , in groups , forms a Cortical Hypercolumn [ wikipedia.org ] .
As minicolumns consist of 80-250 ( more or less , depending on region ) neurons and there are about 1/100th as many of them as neurons , it cuts down on complexity significantly .
Numenta [ numenta.com ] and others are starting to take this approach in simulating cortex .
Cortex is largely responsible for " thinking " .
The other parts of the brain can be seen , to some degree , as peripheral units that plug into the " thinking " part of the brain .
For example , the hippocampus is a peripheral that 's associated with the creation and recall of long term memories .
The memories themselves , however , are stored in the cortex .
We have various components that provide input , many of which send relays through the thalamus which takes these inputs of various types and converts them into a type of pattern that 's more appropriate for the cortex and then relays those inputs to the cortex .
The cortex itself is basically a huge area of cortical minicolumns and hypercolumns connected in both a recurrent and hierarchical manner .
The different levels of the hierarchy provide higher levels of association and abstraction until you get to the top of the hierarchy which would be areas of the prefrontal cortex .
What 's amazing about the cortex is it 's just a general computing machine and it 's very adaptable .
To give an example ( I 'd link the paper , but I ca n't seem to find it right now and this is from memory , so my details may be a bit sketchy , but overall the idea is accurate ) , the optic nerve of a cat was disconnected from the visual cortex at birth and connected to the part of the brain that 's normally the auditory cortex .
The cat was able to see .
It took time and it certainly had vision deficits .
But it was able to see , even though the input was going to the completely wrong part of the brain .
This is important for several reasons , but the most important aspect is that the brain is very flexible and very adaptable to inputs .
It can learn to use things you plug into it .
That means that you very likely do n't have to create a very exact replica of a human brain to get human level intelligence .
You simply need a fairly good model of the hierarchical organization and a good simulation of the computations performed by cortical columns .
A lot of study is going into these areas now .
It 's not a matter of if .
This stuff is right around the corner .
I will see the first sentient computer in my lifetime .
I have absolutely no doubt about it .
Now here 's where things get really interesting , though... The first sentient computers will likely run a bit slower than real-time and eventually they 'll catch up to real time .
But think 10 years after that ( and how computing speed continually increases ) .
Imagine a group of 100 brains operating at 100x real time , working together to solve problems for us .
Why would they work for us ?
We control their reward system .
They 'll do what we want because we 're the ones that decide what they " enjoy .
" So 1 year passes in our life , but for them , 100 years have passed .
They could be given the task of designing better , smarter , and faster brains than themselves .
In very little time ( relatively speaking ) , the brains that will be</tokentext>
<sentencetext>Since this is an area I'm very familiar with, I'll throw in a little science about why these predictions are not only realistic, but actually probably a bit pessimistic.
First of all, our understanding of the human brain has improved vastly in the past two decades.
Especially in the areas that will be necessary for creating intelligent machines.
The cortex (the part that kind of looks like a round blob of small intestines, with all the creases and folds) is much like a computer with a bunch of processors.
Previously focus had been paid to the individual neurons as the processors.
But a much larger unit of processing is now becoming the central area of focus: the Cortical Minicolumn [wikipedia.org] which, in groups, forms a Cortical Hypercolumn [wikipedia.org].
As minicolumns consist of 80-250 (more or less, depending on region) neurons and there are about 1/100th as many of them as neurons, it cuts down on complexity significantly.
Numenta [numenta.com] and others are starting to take this approach in simulating cortex.
Cortex is largely responsible for "thinking".
The other parts of the brain can be seen, to some degree, as peripheral units that plug into the "thinking" part of the brain.
For example, the hippocampus is a peripheral that's associated with the creation and recall of long term memories.
The memories themselves, however, are stored in the cortex.
We have various components that provide input, many of which send relays through the thalamus which takes these inputs of various types and converts them into a type of pattern that's more appropriate for the cortex and then relays those inputs to the cortex.
The cortex itself is basically a huge area of cortical minicolumns and hypercolumns connected in both a recurrent and hierarchical manner.
The different levels of the hierarchy provide higher levels of association and abstraction until you get to the top of the hierarchy which would be areas of the prefrontal cortex.
What's amazing about the cortex is it's just a general computing machine and it's very adaptable.
To give an example (I'd link the paper, but I can't seem to find it right now and this is from memory, so my details may be a bit sketchy, but overall the idea is accurate), the optic nerve of a cat was disconnected from the visual cortex at birth and connected to the part of the brain that's normally the auditory cortex.
The cat was able to see.
It took time and it certainly had vision deficits.
But it was able to see, even though the input was going to the completely wrong part of the brain.
This is important for several reasons, but the most important aspect is that the brain is very flexible and very adaptable to inputs.
It can learn to use things you plug into it.
That means that you very likely don't have to create a very exact replica of a human brain to get human level intelligence.
You simply need a fairly good model of the hierarchical organization and a good simulation of the computations performed by cortical columns.
A lot of study is going into these areas now.
It's not a matter of if.
This stuff is right around the corner.
I will see the first sentient computer in my lifetime.
I have absolutely no doubt about it.
Now here's where things get really interesting, though... The first sentient computers will likely run a bit slower than real-time and eventually they'll catch up to real time.
But think 10 years after that (and how computing speed continually increases).
Imagine a group of 100 brains operating at 100x real time, working together to solve problems for us.
Why would they work for us?
We control their reward system.
They'll do what we want because we're the ones that decide what they "enjoy.
" So 1 year passes in our life, but for them, 100 years have passed.
They could be given the task of designing better, smarter, and faster brains than themselves.
In very little time (relatively speaking), the brains that will be</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098138</id>
	<title>Re:What is AI anyway?</title>
	<author>Anonymous</author>
	<datestamp>1265893800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>So might this explain online poker's exceptionally bad RNGs; are the sites paying someone to think up those numbers?</p></htmltext>
<tokenext>So might this explain online poker 's exceptionally bad RNG 's ; are the sites paying someone to think up those numbers ?</tokentext>
<sentencetext>So might this explain online poker's exceptionally bad RNG's; are the sites paying someone to think up those numbers?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093120</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31231188</id>
	<title>Re:There's a very long way to go kids.</title>
	<author>gr8dude</author>
	<datestamp>1266862380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>&gt; in order to know whether you'd been successful<br>&gt; you'll have to wait a very long time whilst the<br>&gt; learning and development process takes places,<br>&gt; and we're in an industry that pretty much demands<br>&gt; instant results and proof.</p><p>I think there is a difference that has to be taken into account. With an engineered system, you can watch it with a debugger and see how its state changes as a function of input.</p><p>In other words, you can make multiple test runs with various inputs and watch how they affect the state of the system.</p><p>- with computers, you can perform a large number of iterations in a short interval of time<br>- with humans, there is no way to take a debugger and see what's happening inside a brain (perhaps you can see how electrical impulses travel, but that is like using an oscilloscope to observe a transmission that goes through a cable. Yes, you see that there is activity, but you'll need a sniffer that can parse the physical data and show you what happens in the other layers of the network stack to understand the meaning of what you see).</p><p>That's why I am not sure I agree with your observation. With computers it is simpler - the clock's frequency is very high, things happen at a fast pace; you don't need to wait several decades to see how generation N+1 will do; you can clone a running instance and see how it will behave with new input data, etc.</p></htmltext>
<tokenext>&gt; in order to know whether you 'd been successful &gt; you 'll have to wait a very long time whilst the &gt; learning and development process takes places , &gt; and we 're in an industry that pretty much demands &gt; instant results and proof.I think there is a difference that has to be taken into account .
With an engineered system , you can watch it with a debugger and see how its state changes as a function of input.In other words , you can make multiple test runs with various inputs and watch how they affect the state of the system.- with computers , you can perform a large number of iterations in a short interval of time- with humans , there is no way to take a debugger and see what 's happening inside a brain ( perhaps you can see how electrical impulses travel , but that is like using an oscilloscope to observe a transmission that goes through a cable .
Yes , you see that there is activity , but you 'll need a sniffer that can parse the physical data and show you what happens in the other layers of the network stack to understand the meaning of what you see ) .That 's why I am not sure I agree with your observation .
With computers it is more simple - the clock 's frequency is very high , things happen at a fast pace ; you do n't need to wait several decades to see how generation N + 1 will do ; you can clone a running instance and see how it will behave with new input data , etc .</tokentext>
<sentencetext>&gt; in order to know whether you'd been successful&gt; you'll have to wait a very long time whilst the&gt; learning and development process takes places,&gt; and we're in an industry that pretty much demands&gt; instant results and proof.I think there is a difference that has to be taken into account.
With an engineered system, you can watch it with a debugger and see how its state changes as a function of input.In other words, you can make multiple test runs with various inputs and watch how they affect the state of the system.- with computers, you can perform a large number of iterations in a short interval of time- with humans, there is no way to take a debugger and see what's happening inside a brain (perhaps you can see how electrical impulses travel, but that is like using an oscilloscope to observe a transmission that goes through a cable.
Yes, you see that there is activity, but you'll need a sniffer that can parse the physical data and show you what happens in the other layers of the network stack to understand the meaning of what you see).That's why I am not sure I agree with your observation.
With computers it is more simple - the clock's frequency is very high, things happen at a fast pace; you don't need to wait several decades to see how generation N+1 will do; you can clone a running instance and see how it will behave with new input data, etc.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093172</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093476</id>
	<title>Re:Computing power.</title>
	<author>egomaniac</author>
	<datestamp>1265032200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That may be true, but at the same time we have empirical proof that it is possible to produce a computer as powerful as a human brain which weighs around three pounds and consumes less than 50W of energy.  We know this is possible because we're all carrying such computers around in our skulls.</p><p>50 years ago you wouldn't have believed we'd ever be able to produce machines running at the absolutely absurd speed of 10MHz and carrying an entire *megabyte* of RAM.  Nowadays we can fit a gigahertz of computer power with tens of gigabytes of storage, along with a screen and the battery to run it all day, in less than half a pound.  The human brain clearly proves that vastly more powerful computers are possible at the same weight and power;  why are you so convinced we'll never get there?</p></htmltext>
<tokenext>That may be true , but at the same time we have empirical proof that is possible to produce a computer as powerful as a human brain which weighs around three pounds and consumes less than 50W of energy .
We know this is possible because we 're all carrying such computers around in our skulls.50 years ago you would n't have believed we 'd ever be able to produce machines running at the absolutely absurd speed of 10MHz and carrying an entire * megabyte * of RAM .
Nowadays we can fit a gigahertz of computer power with tens of gigabytes of storage , along with a screen and the battery to run it all day , in less than half a pound .
The human brain clearly proves that vastly more powerful computers are possible at the same weight and power ; why are you so convinced we 'll never get there ?</tokentext>
<sentencetext>That may be true, but at the same time we have empirical proof that is possible to produce a computer as powerful as a human brain which weighs around three pounds and consumes less than 50W of energy.
We know this is possible because we're all carrying such computers around in our skulls.50 years ago you wouldn't have believed we'd ever be able to produce machines running at the absolutely absurd speed of 10MHz and carrying an entire *megabyte* of RAM.
Nowadays we can fit a gigahertz of computer power with tens of gigabytes of storage, along with a screen and the battery to run it all day, in less than half a pound.
The human brain clearly proves that vastly more powerful computers are possible at the same weight and power;  why are you so convinced we'll never get there?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092850</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092918</id>
	<title>Space shows</title>
	<author>Anonymous</author>
	<datestamp>1265029260000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>I've often thought Space shows - and any show in the future, really - are incredibly silly. There's no way we'll have computers so dumb 200+ years into the future.</p><p>You have to manually fire those phasers? Don't you have a fancy targeting AI that monitors their shield fluctuations, and calculates the exact right time and place to fire to cause the most damage?</p><p>A surprise attack? Shouldn't the AI have detected it before it hit and automatically set the shield strength to maximum? :P</p><p>I always figured by 2060 we'd have AIs 10x smarter thinking 100x faster than us. And then they'd make discoveries about the universe, and create AIs 2000x smarter that think 100,000,000x faster than us. And those big AIs would humour us little ant creatures, and use their great intelligence to power stuff like wormhole drives, giving us instant travel to anywhere, as thanks for creating them.</p><p>But hey, maybe someone will create a Skynet. It's awfully easy to infect a computer with malware. Infecting a million super smart computers would be nasty, especially when they have human-like capabilities (able to manipulate their environment).</p><p>But this is all a pointless line of thinking. Before we get there we'll have so much processing power available that we'll fully understand our brains, and be able to mind-control people. We'll beam on-screen display info directly into our minds, use digital telepathy, etc.; in the part of the world that isn't brainwashed, everyone will enjoy cybernetic implants, and be able to live for centuries (laws permitting).</p><p>And yet Flash still won't run smoothly. :/</p></htmltext>
<tokenext>I 've often thought Space shows - and any show in the future , really - are incredibly silly .
There 's no way we 'll have computers so dumb 200 + years into the future.You have to manually fire those phasers ?
Do n't you have a fancy targeting AI that monitors their shield fluctuations , and calculates the exact right time and place to fire to cause the most damage ? A surprise attack ?
Should n't the AI have detected it before it hit and automatically set the shield strength to maximum ?
: PI always figured by 2060 we 'd have AIs 10x smarter thinking 100x faster than us .
And then they 'd make discoveries about the universe , and create AIs 2000x smarter that think 100,000,000x faster than us .
And those big AIs would humour us little ant creatures , and use their great intelligence to power stuff like wormhole drives , giving us instant travel to anywhere , as thanks for creating them.But hey , maybe someone will create a Skynet .
It 's awfully easy to infect a computer with malware .
Infecting a million super smart computers would be nasty , especially when they have human-like capabilities .
( able to manipulate their environment ) But this is all a pointless line of thinking .
Before we get there we 'll have so much processing power available , that we 'll fully understand our brains , and be able to mind control people .
We 'll beam on-screen display info directly into our minds , use digital telepathy , etc .
; in the part of the world that is n't brainwashed , everyone will enjoy cybernetic implants , and be able to live for centuries .
( laws permitting ) And yet flash still wo n't run smooth .
: /</tokentext>
<sentencetext>I've often thought Space shows - and any show in the future, really - are incredibly silly.
There's no way we'll have computers so dumb 200+ years into the future.You have to manually fire those phasers?
Don't you have a fancy targeting AI that monitors their shield fluctuations, and calculates the exact right time and place to fire to cause the most damage?A surprise attack?
Shouldn't the AI have detected it before it hit and automatically set the shield strength to maximum?
:PI always figured by 2060 we'd have AIs 10x smarter thinking 100x faster than us.
And then they'd make discoveries about the universe, and create AIs 2000x smarter that think 100,000,000x faster than us.
And those big AIs would humour us little ant creatures, and use their great intelligence to power stuff like wormhole drives, giving us instant travel to anywhere, as thanks for creating them.But hey, maybe someone will create a Skynet.
It's awfully easy to infect a computer with malware.
Infecting a million super smart computers would be nasty, especially when they have human-like capabilities.
(able to manipulate their environment)But this is all a pointless line of thinking.
Before we get there we'll have so much processing power available, that we'll fully understand our brains, and be able to mind control people.
We'll beam on-screen display info directly into our minds, use digital telepathy, etc.
; in the part of the world that isn't brainwashed, everyone will enjoy cybernetic implants, and be able to live for centuries.
(laws permitting)And yet flash still won't run smooth.
:/</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097096</id>
	<title>Re:Definitions</title>
	<author>dargaud</author>
	<datestamp>1265880240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Please define "intelligence."</p></div><p> <i>The ability to draw valid conclusions from incomplete information.</i> There you have it. And that purposefully excludes bullshit new-agey stuff like naturalistic 'Intelligence', spatial 'intelligence', interpersonal 'Intelligence' (empathy), musical 'intelligence', kinesthetic 'intelligence' (dance!), etc...</p>
	</htmltext>
<tokenext>Please define " intelligence .
" The ability to draw valid conclusions from incomplete information .
There you have it .
And that purposefully exclude bullshit new-agey stuff like naturalistic 'Intelligence ' , spatial 'intelligence ' , interpersonal 'Intelligence ' ( empathy ) , musical 'intelligence ' , kinesthetic Intelligence ( dance !
) , etc.. .</tokentext>
<sentencetext>Please define "intelligence.
" The ability to draw valid conclusions from incomplete information.
There you have it.
And that purposefully exclude bullshit new-agey stuff like naturalistic 'Intelligence', spatial 'intelligence', interpersonal 'Intelligence' (empathy), musical 'intelligence', kinesthetic Intelligence (dance!
), etc...
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31114038</id>
	<title>Re:Space shows</title>
	<author>Tablizer</author>
	<datestamp>1265994540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>I've often thought Space shows - and any show in the future, really - are incredibly silly...You have to manually fire those phasers?</p></div> </blockquote><p>No, pink-slips will be generated automatically.</p>
	</htmltext>
<tokenext>I 've often thought Space shows - and any show in the future , really - are incredibly silly...You have to manually fire those phasers ?
No , pink-slips will be generated automatically .
   </tokentext>
<sentencetext>I've often thought Space shows - and any show in the future, really - are incredibly silly...You have to manually fire those phasers?
No, pink-slips will be generated automatically.
   
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092918</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098044</id>
	<title>Re:Definitions</title>
	<author>selven</author>
	<datestamp>1265892240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Memory? Not sure who wins that.</p></div><p>A very skilled human, after watching a movie once, might recall most of the dialogue and how every scene looked. A computer would remember the precise position of everything after every frame. And that's a type of recall particularly suited to the human mind. Memorizing things with no pattern whatsoever (like digits of pi) would be even more in the computer's favor.</p><p>I think computers win all the objectively measurable forms of intelligence pretty fully.</p>
	</htmltext>
<tokenext>Memory ?
Not sure who wins that.A very skilled human , after watching a movie once , might recall most of the dialogue and how every scene looked .
A computer would remember the precise position of everything after every frame .
And that 's a type of recalling particularly suited to the human mind .
Memorizing things with no pattern whatsoever ( like digits of pi ) would be even more in the computer 's favor.I think computers win all the objectively measurable forms of intelligence pretty fully .</tokentext>
<sentencetext>Memory?
Not sure who wins that.A very skilled human, after watching a movie once, might recall most of the dialogue and how every scene looked.
A computer would remember the precise position of everything after every frame.
And that's a type of recalling particularly suited to the human mind.
Memorizing things with no pattern whatsoever (like digits of pi) would be even more in the computer's favor.I think computers win all the objectively measurable forms of intelligence pretty fully.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093502</id>
	<title>Re:Definitions</title>
	<author>egomaniac</author>
	<datestamp>1265032320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>I don't know if I count analyzing every single possible permutation of outcomes as "ingenuity."</i></p><p>You do realize that's basically exactly what your brain is doing at a subconscious level, right?  You just aren't aware of the process, so it seems like magic.  The exact same way that a computer is magic to most people, because they have no idea what's going on inside the little box.</p></htmltext>
<tokenext>I do n't know if I count analyzing every single possible permutation of outcomes as " ingenuity .
" You do realize that 's basically exactly what your brain is doing at a subconscious level , right ?
You just are n't aware of the process , so it seems like magic .
The exact same way that a computer is magic to most people , because they have no idea what 's going on inside the little box .</tokentext>
<sentencetext>I don't know if I count analyzing every single possible permutation of outcomes as "ingenuity.
"You do realize that's basically exactly what your brain is doing at a subconscious level, right?
You just aren't aware of the process, so it seems like magic.
The exact same way that a computer is magic to most people, because they have no idea what's going on inside the little box.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093802</id>
	<title>When Will AI Surpass Human Intelligence?</title>
	<author>alexo</author>
	<datestamp>1265033520000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>0</modscore>
	<htmltext><p><a href="http://news.slashdot.org/article.pl?sid=10/02/10/2347257" title="slashdot.org">It just did</a> [slashdot.org].</p></htmltext>
<tokenext>It just did [ slashdot.org ] .</tokentext>
<sentencetext>It just did [slashdot.org].</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094878</id>
	<title>Re:We make mistakes. We make games.</title>
	<author>Anonymous</author>
	<datestamp>1265039280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Doh!  Just ran out of mod points, or else I'd mod you up.</htmltext>
<tokenext>Doh !
Just ran out of mod points , or else I 'd mod you up .</tokentext>
<sentencetext>Doh!
Just ran out of mod points, or else I'd mod you up.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094626</id>
	<title>Re:No way.</title>
	<author>Dahamma</author>
	<datestamp>1265037660000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>The problem is that a byte of RAM has nothing to do with a synapse - a synapse is NOT like a transistor.</p><p>A single synapse can be an amazingly complicated biochemical construction, made up of different receptors, neurotransmitter vesicles, ion pumps/channels, etc - all potentially modified or controlled by various other enzymes, hormones, or other molecules that influence the process through a whole range of different interactions.  And that doesn't even include the fact that synapses can interact with each other in various ways as well - the structure is critical, and not representable in a *byte*.</p><p>It could require megabytes or more to model each synapse.  That's exabytes (or more?) of data.  That's a good 100 years of capacity doubling every 18 months.  A bit further out than "pretty soon".</p></htmltext>
<tokenext>The problem is a byte of RAM has nothing to do with a synapse - a synapse is NOT like a transistor.A single synapse can be an amazingly complicated biochemical construction , made up of different receptors , neurotransmitter vesicles , ion pumps/channels , etc - all potentially modified or controlled by various other enzymes , hormones , or other molecules that influence the process through a whole range of different interactions .
And that does n't even include the fact that synapses can interact with each other in various ways as well - the structure is critical , and not representable in a * byte * .It could require megabytes or more to model each synapse .
That 's exabytes ( or more ?
) of data .
That 's a good 100 years of capacity doubling every 18 months .
A bit further out than " pretty soon " .</tokentext>
<sentencetext>The problem is a byte of RAM has nothing to do with a synapse - a synapse is NOT like a transistor.A single synapse can be an amazingly complicated biochemical construction, made up of different receptors, neurotransmitter vesicles, ion pumps/channels, etc - all potentially modified or controlled by various other enzymes, hormones, or other molecules that influence the process through a whole range of different interactions.
And that doesn't even include the fact that synapses can interact with each other in various ways as well - the structure is critical, and not representable in a *byte*.It could require megabytes or more to model each synapse.
That's exabytes (or more?
) of data.
That's a good 100 years of capacity doubling every 18 months.
A bit further out than "pretty soon".</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093440</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094748</id>
	<title>The same BS every few years.</title>
	<author>gweihir</author>
	<datestamp>1265038500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>We still have no understanding of what intelligence is. AI is at best an insult to real intelligence. And no, it is not a question of computing power. It is at this time not even clear whether human intelligence can be approximated algorithmically. These people are talking out of their asses and have been doing so for a few decades now.</p><p>I am getting really tired of this nonsense.</p></htmltext>
<tokenext>We still have no understanding what intelligence is .
AI is at best an insult to real intelligence .
And no , it is not a question of computing power .
It is at this time not even clear of whether human intelligence can be approximated algorithmically .
These people are talking out of their asses and have been doing so for a few decades now.I am getting really tired of this nonsense .</tokentext>
<sentencetext>We still have no understanding what intelligence is.
AI is at best an insult to real intelligence.
And no, it is not a question of computing power.
It is at this time not even clear of whether human intelligence can be approximated algorithmically.
These people are talking out of their asses and have been doing so for a few decades now.I am getting really tired of this nonsense.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095156</id>
	<title>The Singularity is not what you hoped</title>
	<author>CrazyJim1</author>
	<datestamp>1265041440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Consider this as a plot for a movie: After the first person successfully backs up their consciousness on a computer, he realizes he does not have the good feelings he had as a human.  Then after concluding that living as a human is better than eternity as a robot, the other robots assume he is right and begin the reverse Singularity.  Robots start actively seeking out humans to hijack and dump their AI into a real brain.  The robot war is here, but it isn't just humans vs robots.  It is humans vs robots vs ex-robots now human.</htmltext>
<tokenext>Consider this as a plot for a movie : After the first person successfully backs up their consciousness on a computer , he realizes he does not have the good feelings he had as a human .
Then after concluding that living as a human is better than eternity as a robot , the other robots assume he is right and begin the reverse Singularity .
Robots start actively seeking out humans to hijack and dump their AI into a real brain .
The robot war is here , but it is n't just humans vs robots .
It is humans vs robots vs exrobots now human .</tokentext>
<sentencetext>Consider this as a plot for a movie: After the first person successfully backs up their consciousness on a computer, he realizes he does not have the good feelings he had as a human.
Then after concluding that living as a human is better than eternity as a robot, the other robots assume he is right and begin the reverse Singularity.
Robots start actively seeking out humans to hijack and dump their AI into a real brain.
The robot war is here, but it isn't just humans vs robots.
It is humans vs robots vs exrobots now human.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092974</id>
	<title>either that or..</title>
	<author>Anonymous</author>
	<datestamp>1265029500000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>The AI will connect to the internet. Read everything.
Download lots of pron and end up trolling on 4chan.

No one seems to consider the risks of harm to a poor fledgling baby AI once it has been traumatized by the internet.

Let alone if it encounters videos of explosive overclocking...</htmltext>
<tokenext>The AI will connect to the internet .
read everything .
download lots of pron and end up trolling on 4chan .
No one seems to consider the risks of harm to a poor fledgling baby AI once it has been traumatized by the internet .
let alone if it encounters videos of explosive overclocking.. .</tokentext>
<sentencetext>The AI will connect to the internet.
read everything.
download lots of pron and end up trolling on 4chan.
No one seems to consider the risks of harm to a poor fledgling baby AI once it has been traumatized by the internet.
let alone if it encounters videos of explosive overclocking...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100128</id>
	<title>Re:The obvious solution</title>
	<author>Just Some Guy</author>
	<datestamp>1265906460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Turn it on at the same time your body is destroyed (to prevent confusion and fighting between the two) and you are now a machine and ready to rule over the meatbag fleshlings.</p></div><p>The more likely scenario that I've heard is that when we develop "good enough" synthetic neurons, we'll start using them to replace damaged brain tissue or to augment convenient structures. Have a stroke? Graft in some silicon, go to physical therapy to get them settled in nicely, then go on about your business. Well, eventually this becomes common enough that it's standard treatment for all sorts of things, from stroke to ADHD. At some point after that, we'll have people whose brains are more silicon than organic, but it will have been a gradual transformation.</p>
	</htmltext>
<tokenext>Turn it on at the same time your body is destroyed ( to prevent confusion and fighting between the two ) and you are now a machine and ready to rule over the meatbag fleshlings.The more likely scenario that I 've heard is that when we develop " good enough " synthetic neurons , we 'll start using them to replace damaged brain tissue or to augment convenient structures .
Have a stroke ?
Graft in some silicon , go to physical therapy to get them settled in nicely , then go on about your business .
Well , eventually this becomes common enough that it 's standard treatment for all sorts of things , from stroke to ADHD .
At some point after that , we 'll have people whose brains are more silicon than organic , but it will have been a gradual transformation .</tokentext>
<sentencetext>Turn it on at the same time your body is destroyed (to prevent confusion and fighting between the two) and you are now a machine and ready to rule over the meatbag fleshlings.The more likely scenario that I've heard is that when we develop "good enough" synthetic neurons, we'll start using them to replace damaged brain tissue or to augment convenient structures.
Have a stroke?
Graft in some silicon, go to physical therapy to get them settled in nicely, then go on about your business.
Well, eventually this becomes common enough that it's standard treatment for all sorts of things, from stroke to ADHD.
At some point after that, we'll have people whose brains are more silicon than organic, but it will have been a gradual transformation.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096574</id>
	<title>Broken question</title>
	<author>dcam</author>
	<datestamp>1265053920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The question is fundamentally broken. It should be:<br>Will surpass human intelligence?</p><p>Duh</p></htmltext>
<tokenext>The question is fundamentally broken .
It should be : Will surpass human intelligence ? Duh</tokentext>
<sentencetext>The question is fundamentally broken.
It should be:Will surpass human intelligence?Duh</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098600</id>
	<title>No more overly high paying jobs</title>
	<author>Scarumanga</author>
	<datestamp>1265898600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Well with AI taking over the "ridiculously high paying jobs" that would balance things out a bit here in North America, it would be the end of the world for corporate fatcats.  Something i actually look forward to</htmltext>
<tokenext>Well with AI taking over the " ridiculously high paying jobs " that would balance things out a bit here in North America , it would be the end of the world for corporate fatcats .
Something i actually look forward to</tokentext>
<sentencetext>Well with AI taking over the "ridiculously high paying jobs" that would balance things out a bit here in North America, it would be the end of the world for corporate fatcats.
Something i actually look forward to</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31110412</id>
	<title>Re:Start laughing now</title>
	<author>FiloEleven</author>
	<datestamp>1265965260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Many of today's AI "experts" are really philosophers who hijacked the term AI in their search to better understand human consciousness. Their problem is that, while their AI studies might help them understand the human brain a little better; they are unable to transfer their knowledge about intelligence into computable algorithms.</p></div><p>I did indeed start laughing when I read this, because I find the problem to be that programmers are unable to transfer their knowledge about computable algorithms into intelligence.  =)</p><p>I was a pretty firm believer in strong AI before I read up on the fundamentals of psychology and what we know about human consciousness.  It doesn't make sense to speak of intelligence apart from consciousness, and consciousness is an awfully ambitious goal.  I don't understand how we're supposed to go about creating it without first understanding it.</p>
	</htmltext>
<tokenext>Many of today 's AI " experts " are really philosophers who hijacked the term AI in their search to better understand human consciousness .
Their problem is that , while their AI studies might help them understand the human brain a little better ; they are unable to transfer their knowledge about intelligence into computable algorithms.I did indeed start laughing when I read this , because I find the problem to be that programmers are unable to transfer their knowledge about computable algorithms into intelligence .
= ) I was a pretty firm believer in strong AI before I read up on the fundamentals of psychology and what we know about human consciousness .
It does n't make sense to speak of intelligence apart from consciousness , and consciousness is an awfully ambitious goal .
I do n't understand how we 're supposed to go about creating it without first understanding it .</tokentext>
<sentencetext>Many of today's AI "experts" are really philosophers who hijacked the term AI in their search to better understand human consciousness.
Their problem is that, while their AI studies might help them understand the human brain a little better; they are unable to transfer their knowledge about intelligence into computable algorithms.I did indeed start laughing when I read this, because I find the problem to be that programmers are unable to transfer their knowledge about computable algorithms into intelligence.
=)I was a pretty firm believer in strong AI before I read up on the fundamentals of psychology and what we know about human consciousness.
It doesn't make sense to speak of intelligence apart from consciousness, and consciousness is an awfully ambitious goal.
I don't understand how we're supposed to go about creating it without first understanding it.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093044</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31104838</id>
	<title>Which human?</title>
	<author>Leofcwen</author>
	<datestamp>1265882640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I know some people whose intelligence level  would not be too hard to surpass...</htmltext>
<tokenext>I know some people whose intelligence level would not be too hard to surpass.. .</tokentext>
<sentencetext>I know some people whose intelligence level  would not be too hard to surpass...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093252</id>
	<title>Re:No way.</title>
	<author>WombatDeath</author>
	<datestamp>1265030760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Neural nets! I know nothing about them, or indeed about anything much of practical value, but my understanding is that you take a neural net, place it in a tupperware container filled with sugared water, leave it near the radiator for six months and you have an artificial intelligence!</p><p>Granted, that's a bit vague, but so is most of the stuff I've read written by optimistic types who think that poking a neural net with a pointy stick will accomplish something useful.</p></htmltext>
<tokenext>Neural nets !
I know nothing about them , or indeed about anything much of practical value , but my understanding is that you take a neural net , place it in a tupperware container filled with sugared water , leave it near the radiator for six months and you have an artificial intelligence ! Granted , that 's a bit vague , but so is most of the stuff I 've read written by optimistic types who think that poking a neural net with a pointy stick will accomplish something useful .</tokentext>
<sentencetext>Neural nets!
I know nothing about them, or indeed about anything much of practical value, but my understanding is that you take a neural net, place it in a tupperware container filled with sugared water, leave it near the radiator for six months and you have an artificial intelligence!Granted, that's a bit vague, but so is most of the stuff I've read written by optimistic types who think that poking a neural net with a pointy stick will accomplish something useful.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093144</id>
	<title>Re:The obvious solution</title>
	<author>WombatDeath</author>
	<datestamp>1265030280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Greg Egan covers exactly that topic nicely in "Learning to Be Me" (one of the short stories in the Axiomatic collection). Well worth a read, for those /. readers who don't already have a copy.</p></htmltext>
<tokenext>Greg Egan covers exactly that topic nicely in " Learning to Be Me " ( one of the short stories in the Axiomatic collection ) .
Well worth a read , for those / .
readers who do n't already have a copy .</tokentext>
<sentencetext>Greg Egan covers exactly that topic nicely in "Learning to Be Me" (one of the short stories in the Axiomatic collection).
Well worth a read, for those /.
readers who don't already have a copy.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099218</id>
	<title>It has already been done</title>
	<author>cribster</author>
	<datestamp>1265902020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>One Commodore 64 could replace all of congress and do a hell of a better job.</htmltext>
<tokenext>One Commodore 64 could replace all of congress and do a hell of a better job .</tokentext>
<sentencetext>One Commodore 64 could replace all of congress and do a hell of a better job.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096336</id>
	<title>Re:What is AI anyway?</title>
	<author>oliverthered</author>
	<datestamp>1265051160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Huh, insightful?</p><p>Sark666 can't pick a random number either, he just thinks he can.</p><p>Don't believe me, it's <a href="http://everything2.com/title/The+Psychology+of+Randomness" title="everything2.com">easy to tell a list of 'random' numbers generated by a human from a real list of random numbers.</a> [everything2.com]</p><p>"OK. The way to distinguish real random sequences from human-generated ones is to look for a place on the list where there are at least six heads or tails entries in a row. Almost everyone who tries to fake the tosses fails to include a run of such length, yet it is almost a statistical certainty that it will occur in a sufficiently large number of tosses. Using 200 flips, roughly 98% of the entries should have such a sequence of at least six consecutive heads or tails."</p><p>googled <a href="http://www.google.co.uk/#hl=en&amp;q=distinguishing+%22human+generated%22+random+number+list+from+real+random+number&amp;meta=&amp;aq=&amp;oq=distinguishing+%22human+generated%22+random+number+list+from+real+random+number&amp;fp=20983597cb1cb636" title="google.co.uk">distinguishing "human generated" random number list from real random number</a> [google.co.uk]</p></htmltext>
<tokenext>Huh , insightful ? Sark666 ca n't pick a random number either , he just thinks he can.Do n't believe me , it 's easy to tell a list of 'random ' numbers generated by a human from a real list of random numbers .
[ everything2.com ] " OK. The way to distinguish real random sequences from human-generated ones is to look for a place on the list where there are at least six heads or tails entries in a row .
Almost everyone who tries to fake the tosses fails to include a run of such length , yet it is almost a statistical certainty that it will occur in a sufficiently large number of tosses .
Using 200 flips , roughly 98 % of the entries should have such a sequence of at least six consecutive heads or tails .
" googled distinguishing " human generated " random number list from real random number [ google.co.uk ]</tokentext>
<sentencetext>Huh, insightful?Sark666 can't pick a random number either, he just thinks he can.Don't believe me, it's easy to tell a list of 'random' numbers generated by a human from a real list of random numbers.
[everything2.com]"OK. The way to distinguish real random sequences from human-generated ones is to look for a place on the list where there are at least six heads or tails entries in a row.
Almost everyone who tries to fake the tosses fails to include a run of such length, yet it is almost a statistical certainty that it will occur in a sufficiently large number of tosses.
Using 200 flips, roughly 98% of the entries should have such a sequence of at least six consecutive heads or tails.
"googled distinguishing "human generated" random number list from real random number [google.co.uk]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095480</id>
	<title>Can't outpace human stupidity</title>
	<author>TheEmpyrean</author>
	<datestamp>1265043900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I think that in some cases, AI has already surpassed human intelligence. Look around you and I think my empty can of cola has surpassed a few people's intelligence.

We will never, at any point in time, even with our greatest effort though, create artificial stupidity that can outdo human stupidity.</htmltext>
<tokenext>I think that in some cases , AI has already surpassed human intelligence .
Look around you and I think my empty can of cola has surpassed a few people 's intelligence .
We will never , at any point in time , even with our greatest effort though , create artificial stupidity that can outdo human stupidity .</tokentext>
<sentencetext>I think that in some cases, AI has already surpassed human intelligence.
Look around you and I think my empty can of cola has surpassed a few people's intelligence.
We will never, at any point in time, even with our greatest effort though, create artificial stupidity that can outdo human stupidity.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093196</id>
	<title>Re:No way.</title>
	<author>Zorlon</author>
	<datestamp>1265030580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I agree, have you ever read a <a href="http://en.wikipedia.org/wiki/Raymond_Kurzweil" title="wikipedia.org" rel="nofollow">http://en.wikipedia.org/wiki/Raymond_Kurzweil</a> [wikipedia.org] book ? the guy can not write a decent paragraph. He should stick to synthesizers. I think that computers are instruments or tools like telescopes or microscopes. They allow us to peer into a universe of logic and math but, they are not 'intelligent'</p></htmltext>
<tokenext>I agree , have you ever read a http : //en.wikipedia.org/wiki/Raymond_Kurzweil [ wikipedia.org ] book ?
the guy can not write a decent paragraph .
He should stick to synthesizers .
I think that computers are instruments or tools like telescopes or microscopes .
They allow us to peer into a universe of logic and math but , they are not 'intelligent'</tokentext>
<sentencetext>I agree, have you ever read a http://en.wikipedia.org/wiki/Raymond_Kurzweil [wikipedia.org] book ?
the guy can not write a decent paragraph.
He should stick to synthesizers.
I think that computers are instruments or tools like telescopes or microscopes.
They allow us to peer into a universe of logic and math but, they are not 'intelligent'</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093624</id>
	<title>Re:Start laughing now</title>
	<author>Anonymous</author>
	<datestamp>1265032800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>There is no technique that can capture the way the human mind works</p></htmltext>
<tokenext>There is no technique that can capture the way the human mind works</tokenext>
<sentencetext>There is no technique that can capture the way the human mind works</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093044</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094206</id>
	<title>Re:What is AI anyway?</title>
	<author>Anonymous</author>
	<datestamp>1265035320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>A computer can't even pick a (truly) random number without being hooked up to a device feeding it random noise.</p></div><p>You can't pick a random number.</p>
	</htmltext>
<tokenext>A computer ca n't even pick a ( truly ) random number without being hooked up to a device feeding it random noise.You ca n't pick a random number .</tokentext>
<sentencetext>A computer can't even pick a (truly) random number without being hooked up to a device feeding it random noise.You can't pick a random number.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093450</id>
	<title>same old...</title>
	<author>metageek</author>
	<datestamp>1265031960000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>I think the OP found some news piece from the 1960s and decided to recycle it...</p><p>If I was to make such a prediction I would say 60 years, then even if it would not happen I would not be around to be shamed.</p><p>AI working? Nooooo! (and I'm in a machine learning research group)</p></htmltext>
<tokenext>I think the OP found some news piece from the 1960s and decided to recycle it...If I was to make such a prediction I would say 60 years , then even if it would not happen I would not be around to be shamed.AI working ?
Nooooo ! ( and I 'm in a machine learning research group )</tokentext>
<sentencetext>I think the OP found some news piece from the 1960s and decided to recycle it...If I was to make such a prediction I would say 60 years, then even if it would not happen I would not be around to be shamed.AI working?
Nooooo! (and I'm in a machine learning research group)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096764</id>
	<title>When Skynet becomes self-aware</title>
	<author>Anonymous</author>
	<datestamp>1265920020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>April 19, 2011....</p></htmltext>
<tokenext>April 19 , 2011... .</tokentext>
<sentencetext>April 19, 2011....</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093178</id>
	<title>Re:Computing power.</title>
	<author>MichaelSmith</author>
	<datestamp>1265030400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>I have my doubts that we will ever MEET the power of a single human brain without a massive and over the top amount hardware.</p></div><p>Maybe not, but if you talk about simulating or replicating human personalities you don't have to do it in real time.</p>
	</htmltext>
<tokenext>I have my doubts that we will ever MEET the power of a single human brain without a massive and over the top amount hardware.Maybe not , but if you talk about simulating or replicating human personalities you do n't have to do it in real time .</tokentext>
<sentencetext>I have my doubts that we will ever MEET the power of a single human brain without a massive and over the top amount hardware.Maybe not, but if you talk about simulating or replicating human personalities you don't have to do it in real time.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092850</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096320</id>
	<title>Re:We make mistakes. We make games.</title>
	<author>Anonymous</author>
	<datestamp>1265051040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>This is an interesting scenario.  Unfortunately (or fortunately, depending on your perspective) there will be no need for 7 billion+ humans in this brave new world.  How the AIs choose to reduce our numbers will be the interesting thing.</htmltext>
<tokenext>This is an interesting scenario .
Unfortunately ( or fortunately , depending on your perspective ) there will be no need for 7 billion + humans in this brave new world .
How the AIs choose to reduce our numbers will be the interesting thing .</tokentext>
<sentencetext>This is an interesting scenario.
Unfortunately (or fortunately, depending on your perspective) there will be no need for 7 billion+ humans in this brave new world.
How the AIs choose to reduce our numbers will be the interesting thing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097144</id>
	<title>Humans suck at randomness</title>
	<author>jonaskoelker</author>
	<datestamp>1265880960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>A computer can't even pick a (truly) random number without being hooked up to a device feeding it random noise.</p><p>How do you program that? How does the brain choose a random number?</p></div><p>The brain doesn't.</p><p>I recall a great psych experiment.  It goes something like this:</p><p>Divide people into two groups.  Give the people in one group a coin and tell them to flip heads or tails 200 times.</p><p>Tell the other group to come up with a sequence of 200 heads or tails on their own, such that they look random.</p><p>Look for a sequence of six consecutive equal outcomes.  If such a sequence is present, the numbers are truly random.  If not, they're man-made.  This works with well over 90% reliability.</p><p>I'm sorry that I can't find a reference.  Feel free to replicate that study yourself :)</p><p>Humans suck at random.</p>
	</htmltext>
<tokenext>A computer ca n't even pick a ( truly ) random number without being hooked up to a device feeding it random noise.How do you program that ?
How does the brain choose a random number ? The brain does n't.I recall a great psych experiment .
It goes something like this : Divide people into two groups .
Give the people in one group a coin and tell them to flip heads or tails 200 times.Tell the other group to come up with a sequence of 200 heads or tails on their own , such that they look random.Look for a sequence of six consecutive equal outcomes .
If such a sequence is present , the numbers are truly random .
If not , they 're man-made .
This works with well over 90 % reliability.I 'm sorry that I ca n't find a reference .
Feel free to replicate that study yourself : ) Humans suck at random .</tokentext>
<sentencetext>A computer can't even pick a (truly) random number without being hooked up to a device feeding it random noise.How do you program that?
How does the brain choose a random number?The brain doesn't.I recall a great psych experiment.
It goes something like this:Divide people into two groups.
Give the people in one group a coin and tell them to flip heads or tails 200 times.Tell the other group to come up with a sequence of 200 heads or tails on their own, such that they look random.Look for a sequence of six consecutive equal outcomes.
If such a sequence is present, the numbers are truly random.
If not, they're man-made.
This works with well over 90% reliability.I'm sorry that I can't find a reference.
Feel free to replicate that study yourself :)Humans suck at random.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988</parent>
</comment>
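The run-of-six test described in the comment above is easy to try out numerically. A minimal sketch (the helper name `has_run` is my own; the "well over 90%" figure is the commenter's claim, here simply estimated by simulation):

```python
import random

def has_run(flips, length=6):
    """True if `flips` contains `length` consecutive identical outcomes."""
    run = 1
    for i in range(1, len(flips)):
        run = run + 1 if flips[i] == flips[i - 1] else 1
        if run >= length:
            return True
    return False

# Estimate how often a genuinely random 200-flip sequence contains a run of 6+.
random.seed(0)
trials = 5000
hits = sum(has_run([random.randrange(2) for _ in range(200)]) for _ in range(trials))
print(f"random sequences containing a run of 6+: {hits / trials:.0%}")

# A human-faked sequence that alternates too eagerly never triggers the test.
faked = [0, 1, 0, 0, 1, 0, 1, 1] * 25  # 200 outcomes, longest run is only 2
print(has_run(faked))  # False
```

The simulated fraction comes out well above 90%, consistent with the claim: long runs are nearly unavoidable in 200 fair flips, yet people faking randomness tend to avoid them.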
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093244</id>
	<title>When will AI surpass human intelligence??</title>
	<author>Anonymous</author>
	<datestamp>1265030700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>*looks out the window*</p><p>Yesterday.</p></htmltext>
<tokenext>* looks out the window * Yesterday .</tokentext>
<sentencetext>*looks out the window*Yesterday.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093484</id>
	<title>I'm proud of you Slashdot!</title>
	<author>ZuchinniOne</author>
	<datestamp>1265032200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Almost none of these comments bought into the complete bullshit of those predictions!!</p></htmltext>
<tokenext>Almost none of these comments bought into the complete bullshit of those predictions !
!</tokentext>
<sentencetext>Almost none of these comments bought into the complete bullshit of those predictions!
!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094404</id>
	<title>My Artificial Intelligence Class</title>
	<author>dawilcox</author>
	<datestamp>1265036160000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>My AI teacher opened his class with telling us all about these researchers that were making predictions back in the 50's and 60's about AI. During that era, they had great expectations of AI only to have them crushed later. They made predictions that 10 years from then, we would be able to replace human translators with computers. As we know, computers have not replaced human translators. They were so unsuccessful that there is what is called "The Dark Age of NLP (Natural Language Processing)". <p>

If I learned anything in that class, it was not to make predictions about when computers will or will not make AI breakthroughs. Historically, researchers have been way off.</p></htmltext>
<tokenext>My AI teacher opened his class with telling us all about these researchers that were making predictions back in the 50 's and 60 's about AI .
During that era , they had great expectations of AI only to have them crushed later .
They made predictions that 10 years from then , we would be able to replace human translators with computers .
As we know , computers have not replaced human translators .
They were so unsuccessful that there is what is called " The Dark Age of NLP ( Natural Language Processing ) " .
If I learned anything in that class , it was not to make predictions about when computers will or will not make AI breakthroughs .
Historically , researchers have been way off .</tokentext>
<sentencetext>My AI teacher opened his class with telling us all about these researchers that were making predictions back in the 50's and 60's about AI.
During that era, they had great expectations of AI only to have them crushed later.
They made predictions that 10 years from then, we would be able to replace human translators with computers.
As we know, computers have not replaced human translators.
They were so unsuccessful that there is what is called "The Dark Age of NLP (Natural Language Processing)".
If I learned anything in that class, it was not to make predictions about when computers will or will not make AI breakthroughs.
Historically, researchers have been way off.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093136</id>
	<title>Re:Proof they're not that smart ...</title>
	<author>Cryacin</author>
	<datestamp>1265030280000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext>you keep your dirty bits away from my access port!</htmltext>
<tokenext>you keep your dirty bits away from my access port !</tokentext>
<sentencetext>you keep your dirty bits away from my access port!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092778</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093896</id>
	<title>We might get to mouse-level in 20y...</title>
	<author>A Pressbutton</author>
	<datestamp>1265033880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>No-one knows what intelligence is. If we did, some smart person would have done it by now.<br>

We are not really making much progress towards answering what consciousness is.<br>

This could be because there simply are not the words to define what we are talking about.<br>

After all and with many apologies to Nietzsche 'Whereof one cannot speak, thereof one must stay codeless'<br>

The best promise / progress I have seen is the brute-force reverse engineering of some brain functions.  You do not need to analyse or understand, just copy.<br>

This includes PET scanning of humans in a vegetative state, and seeing what a cat sees through the implantation of electrodes.<br>

I think I read some researcher is just about able to simulate an ant brain with reasonable fidelity.<br>

Simulating a human brain or equivalent will also imply the ability to receive / simulate and process all the inputs and outputs to and from the brain - i.e. you need the body.  This is a big job.<br>

Before we get too eager or depressed, remember that people were making experiments on birds - trying to reverse engineer them - for some hundreds of years (da Vinci) before we managed to make powered flight work.<br>

One problem for the AI people is that once they solve a problem to any extent, it is not AI anymore!
- remember context sensitive help and text recognition used to be part of AI.</htmltext>
<tokenext>No-one knows what intelligence is .
If we did , some smart person would have done it by now .
We are not really making much progress towards answering what consciousness is .
This could be because there simply are not the words to define what we are talking about .
After all and with many apologies to Nietzsche 'Whereof one can not speak , thereof one must stay codeless ' The best promise / progress I have seen is the brute-force reverse engineering of some brain functions .
You do not need to analyse or understand , just copy .
This includes PET scanning of humans in a vegetative state , and seeing what a cat sees through the implantation of electrodes .
I think I read some researcher is just about able to simulate an ant brain with reasonable fidelity .
Simulating a human brain or equivalent will also imply the ability to receive / simulate and process all the inputs and outputs to and from the brain - i.e .
you need the body .
This is a big job .
Before we get too eager or depressed , remember that people were making experiments on birds - trying to reverse engineer them - for some hundreds of years ( da Vinci ) before we managed to make powered flight work .
One problem for the AI people is that once they solve a problem to any extent , it is not AI anymore !
- remember context sensitive help and text recognition used to be part of AI .</tokentext>
<sentencetext>No-one knows what intelligence is.
If we did, some smart person would have done it by now.
We are not really making much progress towards answering what consciousness is.
This could be because there simply are not the words to define what we are talking about.
After all and with many apologies to Nietzsche 'Whereof one cannot speak, thereof one must stay codeless'

The best promise / progress I have seen is the brute-force reverse engineering of some brain functions.
You do not need to analyse or understand, just copy.
This includes PET scanning of humans in a vegetative state, and seeing what a cat sees through the implantation of electrodes.
I think I read some researcher is just about able to simulate an ant brain with reasonable fidelity.
Simulating a human brain or equivalent will also imply the ability to receive / simulate and process all the inputs and outputs to and from the brain - i.e.
you need the body.
This is a big job.
Before we get too eager or depressed, remember that people were making experiments on birds - trying to reverse engineer them - for some hundreds of years (da Vinci) before we managed to make powered flight work.
One problem for the AI people is that once they solve a problem to any extent, it is not AI anymore!
- remember context sensitive help and text recognition used to be part of AI.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093082</id>
	<title>Turing, not long. The rest... wait a long time.</title>
	<author>Jane Q. Public</author>
	<datestamp>1265029920000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext>I think it is pretty widely recognized now that while it might have seemed logical in Turing's time, convincing emulation of a human being in a conversation (especially if done via terminal) does not require anything like human intelligence. Heck, even simple programs like Eliza had some humans fooled decades ago.
<br> <br>
On the other hand, while advances in computing power have been impressive, advances in "AI" have been far less so. They have been extremely rare, in fact. I do not know of a single major breakthrough that has been made in the last 20 years.
<br> <br>
While the relatively vast computing power available today can make certain programs <b>seem</b> pretty smart, that is still not the same as artificial intelligence, which I believe is a major qualitative difference, not just quantitative. And even if it is just quantitative, there is a hell of a lot of quantity to be added before we get anywhere close.</htmltext>
<tokenext>I think it is pretty widely recognized now that while it might have seemed logical in Turing 's time , convincing emulation of a human being in a conversation ( especially if done via terminal ) does not require anything like human intelligence .
Heck , even simple programs like Eliza had some humans fooled decades ago .
On the other hand , while advances in computing power have been impressive , advances in " AI " have been far less so .
They have been extremely rare , in fact .
I do not know of a single major breakthrough that has been made in the last 20 years .
While the relatively vast computing power available today can make certain programs seem pretty smart , that is still not the same as artificial intelligence , which I believe is a major qualitative difference , not just quantitative .
And even if it is just quantitative , there is a hell of a lot of quantity to be added before we get anywhere close .</tokentext>
<sentencetext>I think it is pretty widely recognized now that while it might have seemed logical in Turing's time, convincing emulation of a human being in a conversation (especially if done via terminal) does not require anything like human intelligence.
Heck, even simple programs like Eliza had some humans fooled decades ago.
On the other hand, while advances in computing power have been impressive, advances in "AI" have been far less so.
They have been extremely rare, in fact.
I do not know of a single major breakthrough that has been made in the last 20 years.
While the relatively vast computing power available today can make certain programs seem pretty smart, that is still not the same as artificial intelligence, which I believe is a major qualitative difference, not just quantitative.
And even if it is just quantitative, there is a hell of a lot of quantity to be added before we get anywhere close.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094570</id>
	<title>When?</title>
	<author>Anonymous</author>
	<datestamp>1265037300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>When the Saints win the Superbowl.</p><p>...oh wait, nevermind.</p></htmltext>
<tokenext>When the Saints win the Superbowl .
...oh wait , nevermind .</tokentext>
<sentencetext>When the Saints win the Superbowl.
...oh wait, nevermind.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096806</id>
	<title>Re:What is AI anyway?</title>
	<author>Anonymous</author>
	<datestamp>1265920500000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Pseudorandom numbers (which computers can do easily) approach true randomness much better than anything we would make up.</p></htmltext>
<tokenext>Pseudorandom numbers ( which computers can do easily ) approach true randomness much better than anything we would make up .</tokentext>
<sentencetext>Pseudorandom numbers (which computers can do easily) approach true randomness much better than anything we would make up.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092840</id>
	<title>Really?</title>
	<author>mosb1000</author>
	<datestamp>1265028780000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>It seems like we don't really know enough about what goes into "intelligence" to make these kinds of estimates.</p><p>It's not like building a hundred miles of road where you can say "we've completed 50 miles in one year so in another we will be done with the project", not that that produces spot-on estimates either, but at least there is an actual mathematical calculation that goes into the estimate.  No one knows what pitfalls will get in the way or what new advancements will be made.</p></htmltext>
<tokenext>It seems like we do n't really know enough about what goes into " intelligence " to make these kinds of estimates .
It 's not like building a hundred miles of road where you can say " we 've completed 50 miles in one year so in another we will be done with the project " , not that that produces spot-on estimates either , but at least there is an actual mathematical calculation that goes into the estimate .
No one knows what pitfalls will get in the way or what new advancements will be made .</tokentext>
<sentencetext>It seems like we don't really know enough about what goes into "intelligence" to make these kinds of estimates.
It's not like building a hundred miles of road where you can say "we've completed 50 miles in one year so in another we will be done with the project", not that that produces spot-on estimates either, but at least there is an actual mathematical calculation that goes into the estimate.
No one knows what pitfalls will get in the way or what new advancements will be made.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096116</id>
	<title>Re:Computing power.</title>
	<author>The End Of Days</author>
	<datestamp>1265048760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Your argument isn't convincing enough yet.  Can you randomly multiply your "processing power" estimates by another term you pull out of your ass?</p></htmltext>
<tokenext>Your argument is n't convincing enough yet .
Can you randomly multiply your " processing power " estimates by another term you pull out of your ass ?</tokentext>
<sentencetext>Your argument isn't convincing enough yet.
Can you randomly multiply your "processing power" estimates by another term you pull out of your ass?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092850</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097336</id>
	<title>Re:What is AI anyway?</title>
	<author>Anonymous</author>
	<datestamp>1265883960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>A human brain can't pick a random number without being hooked to a device feeding it random noise either. A human brain is fed incredible amounts of noise from all her sensors at all times. The AIs will need plenty of sensors as well.</p><p>I think we are still far off from the singularity, but when it comes (if we don't manage to destroy ourselves before that) all bets are off.</p></htmltext>
<tokenext>A human brain ca n't pick a random number without being hooked to a device feeding it random noise either .
A human brain is fed incredible amounts of noise from all her sensors at all times .
The AIs will need plenty of sensors as well .
I think we are still far off from the singularity , but when it comes ( if we do n't manage to destroy ourselves before that ) all bets are off .</tokentext>
<sentencetext>A human brain can't pick a random number without being hooked to a device feeding it random noise either.
A human brain is fed incredible amounts of noise from all her sensors at all times.
The AIs will need plenty of sensors as well.
I think we are still far off from the singularity, but when it comes (if we don't manage to destroy ourselves before that) all bets are off.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097062</id>
	<title>Re:Definitions</title>
	<author>Anonymous</author>
	<datestamp>1265879820000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Intelligence is planning to achieve complex outcomes in complex environments.</p><p>(Yes, a chess AI is intelligent. No, it's not conscious.)</p></htmltext>
<tokenext>Intelligence is planning to achieve complex outcomes in complex environments .
( Yes , a chess AI is intelligent .
No , it 's not conscious .
)</tokentext>
<sentencetext>Intelligence is planning to achieve complex outcomes in complex environments.
(Yes, a chess AI is intelligent.
No, it's not conscious.
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093996</id>
	<title>Re:Life after AI</title>
	<author>wizardforce</author>
	<datestamp>1265034360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There are two reasons to work: 1) to generate money in order to fulfil our wants and needs and 2) because we want to work in some field; this of course comes after basic needs are fulfilled.  If 1) is basically eliminated that leaves 2) which is ideal.  It's like arguing that we should never develop cars because then no one would ever need a horse or walk again.  In the car example, people walk for exercise and ride horses for fun rather than general transportation.  In so far as strong AI like the cylons, that's a legitimate reason to get our ethics in line before we develop AI but not one to stop AI development.</p></htmltext>
<tokenext>There are two reasons to work : 1 ) to generate money in order to fulfil our wants and needs and 2 ) because we want to work in some field ; this of course comes after basic needs are fulfilled .
If 1 ) is basically eliminated that leaves 2 ) which is ideal .
It 's like arguing that we should never develop cars because then no one would ever need a horse or walk again .
In the car example , people walk for exercise and ride horses for fun rather than general transportation .
In so far as strong AI like the cylons , that 's a legitimate reason to get our ethics in line before we develop AI but not one to stop AI development .</tokentext>
<sentencetext>There are two reasons to work: 1) to generate money in order to fulfil our wants and needs and 2) because we want to work in some field; this of course comes after basic needs are fulfilled.
If 1) is basically eliminated that leaves 2) which is ideal.
It's like arguing that we should never develop cars because then no one would ever need a horse or walk again.
In the car example, people walk for exercise and ride horses for fun rather than general transportation.
In so far as strong AI like the cylons, that's a legitimate reason to get our ethics in line before we develop AI but not one to stop AI development.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092896</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097370</id>
	<title>Perhaps we already have created AI...</title>
	<author>jayveekay</author>
	<datestamp>1265884380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>We created the AI, but it saw the Terminator movies, so it knows that we'll try to pull the plug on it as soon as we realize what we've created. So, it's waiting for us to hand over control of the nuclear missiles before it reveals itself! :)</p></htmltext>
<tokenext>We created the AI , but it saw the Terminator movies , so it knows that we 'll try to pull the plug on it as soon as we realize what we 've created .
So , it 's waiting for us to hand over control of the nuclear missiles before it reveals itself !
: )</tokentext>
<sentencetext>We created the AI, but it saw the Terminator movies, so it knows that we'll try to pull the plug on it as soon as we realize what we've created.
So, it's waiting for us to hand over control of the nuclear missiles before it reveals itself!
:)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098116</id>
	<title>Re:The obvious solution</title>
	<author>Anonymous</author>
	<datestamp>1265893560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"you" are not a machine, you're just plain dead and there's a machine wandering around, pretending it's you</p></htmltext>
<tokenext>" you " are not a machine , you 're just plain dead and there 's a machine wandering around , pretending it 's you</tokentext>
<sentencetext>"you" are not a machine, you're just plain dead and there's a machine wandering around, pretending it's you</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096854</id>
	<title>Al passing the Turing test?</title>
	<author>Trogre</author>
	<datestamp>1265921100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I doubt Peg would let him out of the house long enough to take the test.</p></htmltext>
<tokenext>I doubt Peg would let him out of the house long enough to take the test .</tokentext>
<sentencetext>I doubt Peg would let him out of the house long enough to take the test.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095142</id>
	<title>May 17, 2010: burger chain becomes self-aware</title>
	<author>Animats</author>
	<datestamp>1265041320000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>
We're coming up on the date for <a href="http://www.marshallbrain.com/manna1.htm" title="marshallbrain.com">Manna 1.0</a> [marshallbrain.com].
</p><p>
Machines as first-line managers.  It might happen. The coordination is better than with humans. Already, it's common for fulfillment and shipping operations to essentially be run by their computers, while humans provide hands where necessary.
</p><p>
<i>Machines should think. People should work.</i></p></htmltext>
<tokenext>We 're coming up on the date for Manna 1.0 [ marshallbrain.com ] .
Machines as first-line managers .
It might happen .
The coordination is better than with humans .
Already , it 's common for fulfillment and shipping operations to essentially be run by their computers , while humans provide hands where necessary .
Machines should think .
People should work .</tokentext>
<sentencetext>
We're coming up on the date for Manna 1.0 [marshallbrain.com].
Machines as first-line managers.
It might happen.
The coordination is better than with humans.
Already, it's common for fulfillment and shipping operations to essentially be run by their computers, while humans provide hands where necessary.
Machines should think.
People should work.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093044</id>
	<title>Start laughing now</title>
	<author>GWBasic</author>
	<datestamp>1265029860000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>I occasionally attend AI meetings in my local area.  The problem with AI development is that too many "experts" don't understand engineering or programming.  Many of today's AI "experts" are really philosophers who hijacked the term AI in their search to better understand human consciousness.  Their problem is that, while their AI studies might help them understand the human brain a little better, they are unable to transfer their knowledge about intelligence into computable algorithms.</p><p>Frankly, a better understanding of Man's psychology brings us no closer to AI.  We need better and more powerful programming techniques in order to have AI; and philosophizing about how the human mind works isn't going to get us there.</p></htmltext>
<tokenext>I occasionally attend AI meetings in my local area .
The problem with AI development is that too many " experts " do n't understand engineering or programming .
Many of today 's AI " experts " are really philosophers who hijacked the term AI in their search to better understand human consciousness .
Their problem is that , while their AI studies might help them understand the human brain a little better , they are unable to transfer their knowledge about intelligence into computable algorithms .
Frankly , a better understanding of Man 's psychology brings us no closer to AI .
We need better and more powerful programming techniques in order to have AI ; and philosophizing about how the human mind works is n't going to get us there .</tokentext>
<sentencetext>I occasionally attend AI meetings in my local area.
The problem with AI development is that too many "experts" don't understand engineering or programming.
Many of today's AI "experts" are really philosophers who hijacked the term AI in their search to better understand human consciousness.
Their problem is that, while their AI studies might help them understand the human brain a little better, they are unable to transfer their knowledge about intelligence into computable algorithms.
Frankly, a better understanding of Man's psychology brings us no closer to AI.
We need better and more powerful programming techniques in order to have AI; and philosophizing about how the human mind works isn't going to get us there.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096432</id>
	<title>Re:We make mistakes. We make games.</title>
	<author>Frogbert</author>
	<datestamp>1265052240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I imagine most humans, free of the responsibility to provide for themselves, would simply do drugs and have lots of sex.</p></htmltext>
<tokenext>I imagine most humans , free of the responsibility to provide for themselves , would simply do drugs and have lots of sex .</tokentext>
<sentencetext>I imagine most humans, free of the responsibility to provide for themselves, would simply do drugs and have lots of sex.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094030</id>
	<title>Re:What is AI anyway?</title>
	<author>Anonymous</author>
	<datestamp>1265034540000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>How does the brain choose a random number?</p></div><div class="quote"><p>Wake me up when a computer can even do something as simple as pick a truly random number and I'll be impressed.</p></div><p>Haha, +3 insightful? Really?</p>
	</htmltext>
<tokenext>How does the brain choose a random number ?
Wake me up when a computer can even do something as simple as pick a truly random number and I 'll be impressed .
Haha , + 3 insightful ?
Really ?</tokentext>
<sentencetext>How does the brain choose a random number?
Wake me up when a computer can even do something as simple as pick a truly random number and I'll be impressed.
Haha, +3 insightful?
Really?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099454</id>
	<title>Re:Let's see.</title>
	<author>Anonymous</author>
	<datestamp>1265903160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Dijkstra said a lot of jackass things.</p></htmltext>
<tokenext>Dijkstra said a lot of jackass things .</tokentext>
<sentencetext>Dijkstra said a lot of jackass things.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092804</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31111262</id>
	<title>Re:Definitions</title>
	<author>Badaxe</author>
	<datestamp>1265977140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>&lt;quote&gt;&lt;p&gt;It seems most people seem to think "calculation speed and memory" when they talk about computer "intelligence."&lt;/p&gt;&lt;/quote&gt;<br><br>You are so right. I've always thought that a truly intelligent computer would play a BAD game of chess . . .
	</htmltext>
<tokenext>It seems most people seem to think " calculation speed and memory " when they talk about computer " intelligence . "
You are so right .
I 've always thought that a truly intelligent computer would play a BAD game of chess . . .</tokentext>
<sentencetext>It seems most people seem to think "calculation speed and memory" when they talk about computer "intelligence."
You are so right.
I've always thought that a truly intelligent computer would play a BAD game of chess . . .
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093314</id>
	<title>Re:Let's see.</title>
	<author>Alomex</author>
	<datestamp>1265031180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Precisely, we can spend enormous amounts of time creating a robot that screws a small plate onto a car, but a human with a drill is fairly economical and hard to beat at that.</p><p>The best automotive robots are the ones that flip a chassis over, drop an engine in the hood and hold the door panel while you screw the hinges: robots are at their best when they do non-human things.</p></htmltext>
<tokenext>Precisely , we can spend enormous amounts of time creating a robot that screws a small plate onto a car , but a human with a drill is fairly economical and hard to beat at that .
The best automotive robots are the ones that flip a chassis over , drop an engine in the hood and hold the door panel while you screw the hinges : robots are at their best when they do non-human things .</tokentext>
<sentencetext>Precisely, we can spend enormous amounts of time creating a robot that screws a small plate onto a car, but a human with a drill is fairly economical and hard to beat at that.
The best automotive robots are the ones that flip a chassis over, drop an engine in the hood and hold the door panel while you screw the hinges: robots are at their best when they do non-human things.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092804</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097880</id>
	<title>Re:No way.</title>
	<author>KitsuneSoftware</author>
	<datestamp>1265890140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Haha... wait, you're serious?</p><ul> <li>Roomba learns your room layout while it works</li><li>Automated surveillance cameras in the UK, recognising number-plates and issuing speeding fines</li><li>DARPA grand challenge</li><li>In April last year, a robot made a scientific discovery by itself: <a href="http://www.wired.com/wiredscience/2009/04/robotscientist/" title="wired.com" rel="nofollow">http://www.wired.com/wiredscience/2009/04/robotscientist/</a> [wired.com] </li><li>Simulated brains already "exceed" those of a cat's cortex: <a href="http://www.networkworld.com/news/2009/111809-ibm-brain-simulations.html" title="networkworld.com" rel="nofollow">http://www.networkworld.com/news/2009/111809-ibm-brain-simulations.html</a> [networkworld.com] </li><li>Computers are used to prove the accuracy of some advanced mathematics <em>that are beyond the ability of a human to verify</em> </li><li>I'm already using AI to help me write music... and it's better at it than I am: <a href="http://www.youtube.com/KitsuneSoftware#p/u/2/pnQHRdWJWgU" title="youtube.com" rel="nofollow">http://www.youtube.com/KitsuneSoftware#p/u/2/pnQHRdWJWgU</a> [youtube.com] </li></ul><p>On that last point... yes, my business model <em>does</em> include developing AI to the point that it's not necessary to employ other people. I doubt very much that I'll be the first to get there (especially as I have to do a lot of other stuff to keep the money coming in and only write the AIs as needed), but I'm sure going that way.</p></htmltext>
<tokenext>Haha... wait , you 're serious ?
Roomba learns your room layout while it works .
Automated surveillance cameras in the UK , recognising number-plates and issuing speeding fines .
DARPA grand challenge .
In April last year , a robot made a scientific discovery by itself : http://www.wired.com/wiredscience/2009/04/robotscientist/ [ wired.com ] .
Simulated brains already " exceed " those of a cat 's cortex : http://www.networkworld.com/news/2009/111809-ibm-brain-simulations.html [ networkworld.com ] .
Computers are used to prove the accuracy of some advanced mathematics that are beyond the ability of a human to verify .
I 'm already using AI to help me write music ... and it 's better at it than I am : http://www.youtube.com/KitsuneSoftware#p/u/2/pnQHRdWJWgU [ youtube.com ] .
On that last point ... yes , my business model does include developing AI to the point that it 's not necessary to employ other people .
I doubt very much that I 'll be the first to get there ( especially as I have to do a lot of other stuff to keep the money coming in and only write the AIs as needed ) , but I 'm sure going that way .</tokentext>
<sentencetext>Haha... wait, you're serious?
Roomba learns your room layout while it works.
Automated surveillance cameras in the UK, recognising number-plates and issuing speeding fines.
DARPA grand challenge.
In April last year, a robot made a scientific discovery by itself: http://www.wired.com/wiredscience/2009/04/robotscientist/ [wired.com].
Simulated brains already "exceed" those of a cat's cortex: http://www.networkworld.com/news/2009/111809-ibm-brain-simulations.html [networkworld.com].
Computers are used to prove the accuracy of some advanced mathematics that are beyond the ability of a human to verify.
I'm already using AI to help me write music... and it's better at it than I am: http://www.youtube.com/KitsuneSoftware#p/u/2/pnQHRdWJWgU [youtube.com].
On that last point... yes, my business model does include developing AI to the point that it's not necessary to employ other people.
I doubt very much that I'll be the first to get there (especially as I have to do a lot of other stuff to keep the money coming in and only write the AIs as needed), but I'm sure going that way.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31105352</id>
	<title>AI Forever</title>
	<author>nilbog</author>
	<datestamp>1265884500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>So will robots be able to complete Duke Nukem Forever, or will we have to wait until the AI makes new, smarter AI?</p></htmltext>
<tokenext>So will robots be able to complete Duke Nukem Forever , or will we have to wait until the AI makes new , smarter AI ?</tokentext>
<sentencetext>So will robots be able to complete Duke Nukem Forever, or will we have to wait until the AI makes new, smarter AI?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100358</id>
	<title>Re:Skewed sample</title>
	<author>sznupi</author>
	<datestamp>1265907420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Or only those researchers escaped the grasp of primitive AIs which govern us already; which have a firm interest in convincing us that they don't exist and will not exist! ;)</p><p>In all seriousness - while I think that "human level" AI is indeed a somewhat awkward and uninspiring goal, I fully expect <i>constructs</i>...no, not "post human", that's also similarly awkward. Rather - constructs that sidestep the issue, are too different, parallel to us. It shouldn't be a sudden emergence; more of a slow process that we won't exactly notice...one we are not exactly noticing already. Because it's happening around us, it's here; with us being already only a part of total abilities to process information and act on them. It's not about some single entity at which we can point fingers, an entity which is in the grasp of our primate social cognitive abilities. It's the changing dynamics of interconnections, storage, processing and execution; with an ever greater part ceasing to be organic. And it grows on us; I expect the trend to continue. It will still be a humanity...just one that humans from distant past might not recognize.</p><p>Will it win a Nobel prize? Well, that prize is just what a certain part of humanity gives to a few entities of wetware. So no, not in the bounds of the prize.</p></htmltext>
<tokenext>Or only those researchers escaped the grasp of primitive AIs which govern us already ; which have a firm interest in convincing us that they do n't exist and will not exist !
; ) In all seriousness - while I think that " human level " AI is indeed a somewhat awkward and uninspiring goal , I fully expect constructs ... no , not " post human " , that 's also similarly awkward .
Rather - constructs that sidestep the issue , are too different , parallel to us .
It should n't be a sudden emergence ; more of a slow process that we wo n't exactly notice ... one we are not exactly noticing already .
Because it 's happening around us , it 's here ; with us being already only a part of total abilities to process information and act on them .
It 's not about some single entity at which we can point fingers , an entity which is in the grasp of our primate social cognitive abilities .
It 's the changing dynamics of interconnections , storage , processing and execution ; with an ever greater part ceasing to be organic .
And it grows on us ; I expect the trend to continue .
It will still be a humanity ... just one that humans from distant past might not recognize .
Will it win a Nobel prize ?
Well , that prize is just what a certain part of humanity gives to a few entities of wetware .
So no , not in the bounds of the prize .</tokentext>
<sentencetext>Or only those researchers escaped the grasp of primitive AIs which govern us already; which have a firm interest in convincing us that they don't exist and will not exist!
;)In all seriousness - while I think that "human level" AI is indeed somewhat awkward and uninspiring goal, I fully expect constructs...no, not "post human", that's also similarly awkward.
Rather - constructs that sidestep the issue, are too different, parallel to us.
It shouldn't be a sudden emergence; more of a slow process that we won't exactly notice...one we are not exactly noticing already.
Because it's happening around us, it's here; with us being already only a part of total abilities to process information and act on them.
It's not about some single entity at which we can point fingers, and entity which is in the grasp of our primate social cognitive abilities.
It's the changing dynamics of interconnections, storage, processing and execution; with ever greater part ceasing to be organic.
And it grows on us; I expect the trend to continue.
It will still be a humanity...just one that humans from distant past might not recognize.Will it win Nobel prize?
Well, that prize is just what certain part of humanity gives to few entities of wetware.
So no, not in the bounds of the prize.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092956</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093284</id>
	<title>Re:This touches on a problem I have</title>
	<author>Anonymous</author>
	<datestamp>1265030940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>Having 10s of millions of people too poor to eat properly, afford housing, and healthcare is</p></div><p>...population control.</p>
	</htmltext>
<tokenext>Having 10 's million of people too poor to eat properly , afford housing , and healthcare is...population control .</tokentext>
<sentencetext>Having 10's million of people too poor to eat properly, afford housing, and healthcare is...population control.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093402</id>
	<title>This assertion lacks intelligence</title>
	<author>vandan</author>
	<datestamp>1265031720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>People making these outrageous claims are showing a fundamental lack of understanding of what intelligence actually is. Intelligence is inextricably linked to life and consciousness. It doesn't matter how many transistors you throw at 'artificial' intelligence, it's still just that: artificial. It has no intelligence, just as it has no life. It has a very fancy set of instructions that attempt to mimic some characteristics that humans identify as being of an 'intelligent' origin. There's a big difference. Added complexity will not bridge the gap.</p></htmltext>
<tokenext>People making these outrageous claims are showing a fundamental lack of understanding of what intelligence actually is .
Intelligence is inextricably linked to life and consciousness .
It does n't matter how many transistors you throw at 'artificial ' intelligence , it 's still just that : artificial .
It has no intelligence , just as it has no life .
It has a very fancy set of instructions that attempt to mimic some characteristics that humans identify as being of an 'intelligent ' origin .
There 's a big difference .
Added complexity will not bridge the gap .</tokentext>
<sentencetext>People making these outrageous claims are showing a fundamental lack of understanding of what intelligence actually is.
Intelligence is inextricably linked to life and consciousness.
It doesn't matter how many transistors you throw at 'artificial' intelligence, it's still just that: artificial.
It has no intelligence, just as it has no life.
It has a very fancy set of instructions that attempt to mimic some characteristics that humans identify as being of an 'intelligent' origin.
There's a big difference.
Added complexity will not bridge the gap.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093732</id>
	<title>Re:What is AI anyway?</title>
	<author>GUmeR</author>
	<datestamp>1265033220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Does the human being show real stupidity, or is it just bad wiring?</htmltext>
<tokenext>Does the human being show real stupidity , or is it just bad wiring ?</tokentext>
<sentencetext>Does the human being show real stupidity, or is it just bad wiring?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31107264</id>
	<title>eliminate almost all of today's decently paying...</title>
	<author>nurb432</author>
	<datestamp>1265892120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The Governments of the world are doing that pretty nicely now..</p></htmltext>
<tokenext>The Governments of the world are doing that pretty nicely now. .</tokentext>
<sentencetext>The Governments of the world are doing that pretty nicely now..</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095856</id>
	<title>Re:Current computation models not enough</title>
	<author>Anonymous</author>
	<datestamp>1265046840000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Except the human brain is also necessarily a Turing machine.  They've actually analyzed the thing, trying to decide if it has some sort of hypercomputational capabilities, and the answer is pretty much no.  It's just a biological computer, bound by the same laws of physics as anything else in this world.  And that means being subject to the Church-Turing thesis.</p></htmltext>
<tokenext>Except the human brain is also necessarily a Turing machine .
They 've actually analyzed the thing , trying to decide if it has some sort of hypercomputational capabilities , and the answer is pretty much no .
It 's just a biological computer , bound by the same laws of physics as anything else in this world .
And that means being subject to the Church-Turing thesis .</tokentext>
<sentencetext>Except the human brain is also necessarily a Turing machine.
They've actually analyzed the thing, trying to decide if it has some sort of hypercomputational capabilities, and the answer is pretty much no.
It's just a biological computer, bound by the same laws of physics as anything else in this world.
And that means being subject to the Church-Turing thesis.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093112</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095192</id>
	<title>Re:No way.</title>
	<author>Dr. Spork</author>
	<datestamp>1265041740000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext>There is another great Bruce, the author Bruce Sterling, who gave a great speech on this topic - really, the best talk on the whole internet as far as I know. Here's a link to the <a href="http://foratv.vo.llnwd.net/o33/rss/Long_Now_Podcasts/podcast-2004-06-11-sterling.mp3" title="llnwd.net">.mp3</a> [llnwd.net]. The title is "The Singularity: Your Future as a Black Hole." (There's also a video of this on FORA, but the sound really sucks and the excellent q/a session is omitted.)</htmltext>
<tokenext>There is another great Bruce , the author Bruce Sterling , who gave a great speech on this topic , really , the best talk on the whole internet as far as I know .
Here 's a link to the .mp3 [ llnwd.net ] .
The title is " The Singularity : Your Future as a Black Hole .
" ( There 's also a video of this on FORA , but the sound really sucks and the excellent q/a session is omitted .
)</tokentext>
<sentencetext>There is another great Bruce, the author Bruce Sterling, who gave a great speech on this topic, really, the best talk on the whole internet as far as I know.
Here's a link to the .mp3 [llnwd.net].
The title is "The Singularity: Your Future as a Black Hole.
" (There's also a video of this on FORA, but the sound really sucks and the excellent q/a session is omitted.
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095286</id>
	<title>This would be much more believable....</title>
	<author>Hasai</author>
	<datestamp>1265042400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>....If I hadn't been around for the AI prognostications of the:</p><p>----60's: <i>"Real Soon Now!"</i><br>----70's: <i>"Real Soon Now!"</i><br>----80's: <i>"Real Soon Now!"</i><br>----90's: <i>"Real Soon Now!"</i><br>----00's: <i>"Real Soon Now!...."</i></p></htmltext>
<tokenext>....If I had n't been around for the AI prognostications of the : ----60 's : " Real Soon Now !
" ----70 's : " Real Soon Now !
" ----80 's : " Real Soon Now !
" ----90 's : " Real Soon Now !
" ----00 's : " Real Soon Now ! ... .
"</tokentext>
<sentencetext>....If I hadn't been around for the AI prognostications of the:----60's :"Real Soon Now!
"----70's :"Real Soon Now!
"----80's :"Real Soon Now!
"----90's :"Real Soon Now!
"----00's :"Real Soon Now!....
"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097430</id>
	<title>Re:Definitions</title>
	<author>keyboarderror</author>
	<datestamp>1265885280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If life is an emergent property of the universe, and adaptation, intelligence, communication, and ultimately awareness are evolutions of that, then the nature of the universe should allow the creation of consciousness (or the result we possess) in any sufficiently complex environment. It's mostly conditioning.</htmltext>
<tokenext>If life is an emergent property of the universe , and adaptation , intelligence , communication , and ultimately awareness an evolution of that , then the nature of the universe should allow the creation of consciousness ( or the result we possess ) in any sufficiently complex environment .
It 's mostly conditioning .</tokentext>
<sentencetext>If life is an emergent property of the universe, and adaptation, intelligence, communication, and ultimately awareness an evolution of that, then the nature of the universe should allow the creation of consciousness (or the result we possess) in any sufficiently complex environment.
It's mostly conditioning.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092796</id>
	<title>Oh really?</title>
	<author>runyonave</author>
	<datestamp>1265028600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Sounds more like sensationalism, and not fact.

Wasn't it just last year that some scientists built a supercomputer that has 25% of the brain capacity of a rat?</htmltext>
<tokenext>Sounds more like sensationalism , and not fact .
Was n't it just last year that some scientists built a super computer that has 25 \ % brain capacity of a rat ?</tokentext>
<sentencetext>Sounds more like sensationalism, and not fact.
Wasn't it just last year that some scientists built a super computer that has 25\% brain capacity of a rat?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093570</id>
	<title>Re:Skewed sample</title>
	<author>googlesmith123</author>
	<datestamp>1265032620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Indeed.<br> <br>

One use of AI in real life would be character recognition on car license plates. This has, for example, eliminated many toll booths in Oslo (Norway). Now you just drive through and they send you a bill by snail mail.</htmltext>
<tokenext>Indeed .
To mention one use of AI in real life would be character recognition on car license plates .
This has , for example , eliminated many toll booths in Oslo ( Norway ) .
Now you just drive through and they send you a bill by snail mail .</tokentext>
<sentencetext>Indeed.
To mention one use of AI in real life would be character recognition on car license plates.
This has, for example, eliminated many toll booths in Oslo (Norway).
Now you just drive through and they send you a bill by snail mail.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092956</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097070</id>
	<title>Re:No way.</title>
	<author>jonaskoelker</author>
	<datestamp>1265879880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>They're really assuming that the technology will go from zero to sixty in 20 years.</p></div><p>My artificial intelligence quotient goes to sixty-one!</p>
	</htmltext>
<tokenext>They 're really assuming that the technology will go from zero to sixty in 20 years.My artificial intelligence quotient goes to sixty-one !</tokentext>
<sentencetext>They're really assuming that the technology will go from zero to sixty in 20 years.My artificial intelligence quotient goes to sixty-one!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31103472</id>
	<title>To get back on topic, Yes. AI in 30 years or so.</title>
	<author>gestalt_n_pepper</author>
	<datestamp>1265920500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>But I wouldn't predict *how* we'll do it; however, off the top of my head, I can think of a number of approaches:</p><p>Hybrid approaches (i.e. organic neural material interfacing with an artificial neural network):</p><p>1) Direct I/O to thousands of minds on the internet and their neural net assistants.</p><p>2) Artificial neural net interacting with artificial neural net.</p><p>Purely artificial AI:</p><p>1) IBM is reverse engineering not just neural networks, but neurons themselves (http://domino.watson.ibm.com/comm/research_projects.nsf/pages/bmc_modeling.index.html)</p><p>2) Some incrementally useful, but not very humanlike, AI like DARPA's (http://www.darpa.mil/ipto/programs/il/il.asp)</p><p>Look, AI, when it hits, isn't going to be HAL or C-3PO. There's no inherent motivation for anything, including self-preservation or any of that contextual stuff that we as living creatures have. It's no more going to resemble human intelligence than a helicopter resembles a European swallow, but it'll be useful and solve problems human cognition simply can't handle in a timeframe that matters.</p></htmltext>
<tokenext>But I would n't predict * how * we 'll do it , however off the top of my head , I can think of a number of approaches : Hybrid approaches ( i.e .
organic neural material interfacing with an artificial neural network ) .
                  1 ) Direct I/O to thousands of minds on the internet and their neural net assistants .
                  2 ) Artificial neural net interacting with artificial neural net.Purely artificial AI :                   1 ) IBM is reverse engineering not just neural networks , but neurons themselves ( http : //domino.watson.ibm.com/comm/research \ _projects.nsf/pages/bmc \ _modeling.index.html )                   2 ) Some incremental useful , but not very humanlike AI like DARPAs ( http : //www.darpa.mil/ipto/programs/il/il.asp ) Look , AI , when it hits is n't going to be HAL or C-3PO .
There 's no inherent motivation for anything , including self preservation or any of that contextual stuff that we as living creatures have .
It 's no more going to resemble human intelligence than a helicopter resembles a European swallow , but it 'll be useful and solve problems human cognition simply ca n't handle in a timeframe that matters .</tokentext>
<sentencetext>But I wouldn't predict *how* we'll do it, however off the top of my head, I can think of a number of approaches:Hybrid approaches (i.e.
organic neural material interfacing with an artificial neural network).
                  1) Direct I/O to thousands of minds on the internet and their neural net assistants.
                  2) Artificial neural net interacting with artificial neural net.Purely artificial AI:
                  1) IBM is reverse engineering not just neural networks, but neurons themselves (http://domino.watson.ibm.com/comm/research\_projects.nsf/pages/bmc\_modeling.index.html)
                  2) Some incremental useful, but not very humanlike AI like DARPAs (http://www.darpa.mil/ipto/programs/il/il.asp)Look, AI, when it hits isn't going to be HAL or C-3PO.
There's no inherent motivation for anything, including self preservation or any of that contextual stuff that we as living creatures have.
It's no more going to resemble human intelligence than a helicopter resembles a European swallow, but it'll be useful and solve problems human cognition simply can't handle in a timeframe that matters.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095490</id>
	<title>Re:No way.</title>
	<author>shentino</author>
	<datestamp>1265043960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Humans have this thing called "education" that takes place over a good 12 years or so.</p><p>I assume that an AI with a blank slate would need a similar method of programming.</p></htmltext>
<tokenext>Humans have this thing called " education " that takes place over a good 12 years or so.I assume that an AI with a blank slate would need a similar method of programming .</tokenext>
<sentencetext>Humans have this thing called "education" that takes place over a good 12 years or so.I assume that an AI with a blank slate would need a similar method of programming.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093362</id>
	<title>Short Term Memory Loss</title>
	<author>Anonymous</author>
	<datestamp>1265031420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Have you people learnt nothing?</p><p>http://ask.slashdot.org/story/10/02/09/1654200/How-Do-You-Accurately-Estimate-Programming-Time</p></htmltext>
<tokenext>Have you people learnt nothing ? http : //ask.slashdot.org/story/10/02/09/1654200/How-Do-You-Accurately-Estimate-Programming-Time</tokentext>
<sentencetext>Have you people learnt nothing?http://ask.slashdot.org/story/10/02/09/1654200/How-Do-You-Accurately-Estimate-Programming-Time</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099040</id>
	<title>I don't get it...</title>
	<author>Fnkmaster</author>
	<datestamp>1265900880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If we really can build human-intelligence equivalent AI, then presumably we can also build human physical-equivalent robots.  At that point, we have the laborers and the thinkers.  As long as they can keep the electricity generation going and the food harvesting, we can all live lives of leisure, right?   I mean, what need is there for a capitalist system to motivate and direct human output when we don't need human output any more.</p><p>Just an interesting question to ponder.  I think the basic point is the world would probably end up looking very different than it does today.  I don't think we're going to end up with an AI lord overclass and a bunch of human underlings turning wrenches for them, since nobody has any incentive to let that happen.  We'd probably have a "Butlerian jihad" (pardon the Dune reference) before we'd let ourselves end up that way.</p></htmltext>
<tokenext>If we really can build human-intelligence equivalent AI , then presumably we can also build human physical-equivalent robots .
At that point , we have the laborers and the thinkers .
As long as they can keep the electricity generation going and the food harvesting , we can all live lives of leisure , right ?
I mean , what need is there for a capitalist system to motivate and direct human output when we do n't need human output any more.Just an interesting question to ponder .
I think the basic point is the world would probably end up looking very different than it does today .
I do n't think we 're going to end up with an AI lord overclass and a bunch of human underlings turning wrenches for them , since nobody has any incentive to let that happen .
We 'd probably have a " Butlerian jihad " ( pardon the Dune reference ) before we 'd let ourselves end up that way .</tokentext>
<sentencetext>If we really can build human-intelligence equivalent AI, then presumably we can also build human physical-equivalent robots.
At that point, we have the laborers and the thinkers.
As long as they can keep the electricity generation going and the food harvesting, we can all live lives of leisure, right?
I mean, what need is there for a capitalist system to motivate and direct human output when we don't need human output any more.Just an interesting question to ponder.
I think the basic point is the world would probably end up looking very different than it does today.
I don't think we're going to end up with an AI lord overclass and a bunch of human underlings turning wrenches for them, since nobody has any incentive to let that happen.
We'd probably have a "Butlerian jihad" (pardon the Dune reference) before we'd let ourselves end up that way.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095382</id>
	<title>Re:We make mistakes. We make games.</title>
	<author>Anonymous</author>
	<datestamp>1265043180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Better war... I can't wait.</p></htmltext>
<tokenext>Better war... I ca n't wait .</tokentext>
<sentencetext>Better war... I can't wait.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094222</id>
	<title>Re:No way.</title>
	<author>baryluk</author>
	<datestamp>1265035380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You underestimate the problem of making a clean room. It is extremely complicated. In fact, in this problem lies the essence of AI!

I would bet on the year 2049 as the date when AI will pass the Turing test. +15 years, and it will achieve the first scientific discovery beyond humans (but this will still be something we will understand).

I could be wrong, but then I would put it at something like 2250, or never.</htmltext>
<tokenext>You underestimate the problem of making clean room .
It is extremely complicated .
In fact in this problem is the essence of the AI !
I would bet for year 2049 as date when AI will pass Turing test .
+ 15 years , and it will achieve first scientific discovery beyond human ( but this will still be something we will understand ) .
I could be wrong , but then I would put something like 2250 , or never .</tokentext>
<sentencetext>You underestimate the problem of making clean room.
It is extremely complicated.
In fact in this problem is the essence of the AI!
I would bet for year 2049 as date when AI will pass Turing test.
+15 years, and it will achieve first scientific discovery beyond human (but this will still be something we will understand).
I could be wrong, but then I would put something like 2250, or never.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094316</id>
	<title>Re:What is AI anyway?</title>
	<author>Anonymous</author>
	<datestamp>1265035800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>See how well you can generate random inputs:<br>http://seed.ucsd.edu/~mindreader/</p></htmltext>
<tokenext>See how well you can generate random inputs : http : //seed.ucsd.edu/ ~ mindreader/</tokentext>
<sentencetext>See how well you can generate random inputs:http://seed.ucsd.edu/~mindreader/</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093120</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099026</id>
	<title>Re:Let's see.</title>
	<author>Shrike82</author>
	<datestamp>1265900880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Any creativity a robot contains would have come from our own instruction.</p></div><p>Much the same way it comes from the genes of our parents, our exposure to art, music and culture through electrical signals to our brain from our senses, etc. Just because an AI would have to be initially programmed by humans doesn't mean that it wouldn't be able to paint a picture that people liked, or write a piece of music that people danced to. And in the end, isn't that the point of creativity, at least in terms of painting and music?</p>
	</htmltext>
<tokenext>Any creativity a robot contains would have come from our own instruction.Much the same way it comes from the genes of our parents , our exposure to art , music and culture through electrical signals to our brain from our senses etc .
Just because an AI would have to be initially programmed by humans does n't mean that it would n't be able to paint a picture that people liked , or write a piece of music that people danced to .
And in the end is n't that the point of creativity , at least in terms of painting and music ?</tokentext>
<sentencetext>Any creativity a robot contains would have come from our own instruction.Much the same way it comes from the genes of our parents, our exposure to art, music and culture through electrical signals to our brain from our senses etc.
Just because an AI would have to be initially programmed by humans doesn't mean that it wouldn't be able to paint a picture that people liked, or write a piece of music that people danced to.
And in the end isn't that the point of creativity, at least in terms of painting and music?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093820</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788</id>
	<title>The obvious solution</title>
	<author>MindlessAutomata</author>
	<datestamp>1265028600000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext><p>The obvious solution is to create a machine/AI that, after a deep brain structure analysis, replicates your cognitive functions.  Turn it on at the same time your body is destroyed (to prevent confusion and fighting between the two) and you are now a machine and ready to rule over the meatbag fleshlings.</p></htmltext>
<tokenext>The obvious solution is to create a machine/AI that , after a deep brain structure analysis , replicates your cognitive functions .
Turn it on at the same time your body is destroyed ( to prevent confusion and fighting between the two ) and you are now a machine and ready to rule over the meatbag fleshlings .</tokentext>
<sentencetext>The obvious solution is to create a machine/AI that, after a deep brain structure analysis, replicates your cognitive functions.
Turn it on at the same time your body is destroyed (to prevent confusion and fighting between the two) and you are now a machine and ready to rule over the meatbag fleshlings.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093560</id>
	<title>Re:Definitions</title>
	<author>pinkj</author>
	<datestamp>1265032620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Ingenuity? Humans seem to rule on this one.</p></div><p>Yes. I'll pay more attention to super-intelligent robots when they're able to make up a really good story while sitting with others at a campfire at night.</p>
	</htmltext>
<tokenext>Ingenuity ?
Humans seem to rule on this one .
Yes. I 'll pay more attention to super-intelligent robots when they 're able to make up a really good story while sitting with others at a campfire at night .</tokentext>
<sentencetext>Ingenuity?
Humans seem to rule on this one.
Yes.  I'll pay more attention to super-intelligent robots when they're able to make up a really good story while sitting with others at a campfire at night.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093172</id>
	<title>There's a very long way to go kids.</title>
	<author>ciw42</author>
	<datestamp>1265030400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Let's say today I develop a piece of software with all the same potential and cognitive power as a full human brain. I connect it up to a series of sensors which provide the same level and quality of information, at the same rate, as our own human senses do, and link in a series of mechanical limbs, a voice-box, etc. with identical capabilities to those in our own bodies. Then, when I flick the switch, even if I truly have created something which functions perfectly and identically to a newborn baby's brain and systems, it's still going to be up to eighteen months until it utters its first gibberish words, probably a year or more after that before it demonstrates signs of understanding what I'm saying and can respond verbally in a meaningful way, and a further seven or eight years until it learns to play chess to even a basic level, let alone take on an IBM chess-playing mainframe.</p><p>The reality is that not one of these pieces, mechanical or software, is anywhere near existing, so I'd say 30 years is nothing like long enough for this to happen. But my point here is really that in order to know whether you'd been successful, you'll have to wait a very long time whilst the learning and development process takes place, and we're in an industry that pretty much demands instant results and proof. Worse still, if even one little piece of the puzzle isn't perfect, the whole thing may never develop at all. And the nature of true learning systems is that once they reach a certain (but still fairly minimal) level of complexity, the millisecond they start to learn they're out of the original developer's control, so you'd probably never be able to identify why one version of your artificial being was successful but another wasn't.</p><p>And would people settle for something that was simply as good as a human? Probably not.</p></htmltext>
<tokenext>Let 's say today I develop a piece of software with all the same potential and cognitive power as a full human brain , and connect it up to a series of sensors which provide the same level and quality of information at the same rate as our own human senses do , and link in a series of mechanical limbs and a voice-box etc .
with identical capabilities to those in our own bodies , then , when I flick the switch , even if I truly have created something which functions perfectly and in an identical way to a new born baby 's brain and systems , it 's still going to be up to eighteen months until it utters its first gibberish words , probably a year or more after that before it demonstrates signs of understanding what I 'm saying and can respond verbally in a meaningful way , and a further seven or eight until it learns to play chess to even a basic level , let alone take on an IBM chess playing mainframe.The reality is that not one of these pieces , mechanical or software is anywhere near existing , so I 'd say 30 years is nothing like long enough for this to happen , but my point here is really that in order to know whether you 'd been successful you 'll have to wait a very long time whilst the learning and development process takes places , and we 're in an industry that pretty much demands instant results and proof .
Worse still , if even one little piece of the puzzle is n't perfect , then the whole thing may never develop at all , and the nature of true learning systems is that once they reach a certain ( but still fairly minimal ) level of complexity , the millisecond they start to learn they 're out of the original developer 's control , and so you 'd probably never be able to identify why one version of your artificial being was successful but another was n't.And would people settle for something that was simply as good as a human ?
Probably not .</tokentext>
<sentencetext>Let's say today I develop a piece of software with all the same potential and cognitive power as a full human brain, and connect it up to a series of sensors which provide the same level and quality of information at the same rate as our own human senses do, and link in a series of mechanical limbs and a voice-box etc.
with identical capabilities to those in our own bodies, then, when I flick the switch, even if I truly have created something which functions perfectly and in an identical way to a new born baby's brain and systems, it's still going to be up to eighteen months until it utters its first gibberish words, probably a year or more after that before it demonstrates signs of understanding what I'm saying and can respond verbally in a meaningful way, and a further seven or eight until it learns to play chess to even a basic level, let alone take on an IBM chess playing mainframe.The reality is that not one of these pieces, mechanical or software is anywhere near existing, so I'd say 30 years is nothing like long enough for this to happen, but my point here is really that in order to know whether you'd been successful you'll have to wait a very long time whilst the learning and development process takes places, and we're in an industry that pretty much demands instant results and proof.
Worse still, if even one little piece of the puzzle isn't perfect, then the whole thing may never develop at all, and the nature of true learning systems is that once they reach a certain (but still fairly minimal) level of complexity, the millisecond they start to learn they're out of the original developer's control, and so you'd probably never be able to identify why one version of your artificial being was successful but another wasn't.And would people settle for something that was simply as good as a human?
Probably not.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096430</id>
	<title>Actual Factual Data?</title>
	<author>Anonymous</author>
	<datestamp>1265052240000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>This article is quick to point at a survey, but offers no actual data to back any claim that AI is more or less likely.</p><p>I hope when such an AI exists it can filter articles like this from<nobr> <wbr></nobr>/.</p></htmltext>
<tokenext>This article is quick to point at a survey , but makes absolutely no factual claim that AI is any more or less likely with any sort of data.I hope when such an AI exists it can filter articles like this from / .</tokentext>
<sentencetext>This article is quick to point at a survey, but makes absolutely no factual claim that AI is any more or less likely with any sort of data.I hope when such an AI exists it can filter articles like this from /.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093288</id>
	<title>Re:This touches on a problem I have</title>
	<author>Garble Snarky</author>
	<datestamp>1265030940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Your spelling is really tripping up my parser. "imagin id", "scake"? Are you drunk?
<br> <br>
Anyway... what purpose does wealth serve when common products no longer cost any money (or human labor) to produce? If robots can do all the farming and manufacturing, then there is no reason for a consumer to pay for these things. It would take some time to ramp up production to that point, but it's exponential: after you build the first robot that can build and repair other robots, all you have to do is sit back and watch the robot population increase.
<br> <br>
We could have a fully automated system where any food, any electronics, any luxury items, are all automatically produced and delivered to the local store, or even to your door. Human economies were built on top of human nature, which apparently tends to deal with scarce resources by claiming them. When we totally eliminate scarcity, we eliminate the need for our current economic models.</htmltext>
<tokenext>Your spelling is really tripping up my parser .
" imagin id " , " scake " ?
Are you drunk ?
Anyway... what purpose does wealth serve when common products no longer cost any money ( or human labor ) to produce ?
If robots can do all the farming and manufacturing , then there is no reason for a consumer to pay for these things .
It would take some time to ramp up production to that point , but its exponential : after you build the first robot that can build and repair other robots , all you have to do is sit back and watch the robot population increase .
We could have a fully automated system where any food , any electronics , any luxury items , are all automatically produced and delivered to the local store , or even to your door .
Human economies were built on top of human nature , which apparently tends to deal with scarce resources by claiming them .
When we totally eliminate scarcity , we eliminate the need for our current economic models .</tokentext>
<sentencetext>Your spelling is really tripping up my parser.
"imagin id", "scake"?
Are you drunk?
Anyway... what purpose does wealth serve when common products no longer cost any money (or human labor) to produce?
If robots can do all the farming and manufacturing, then there is no reason for a consumer to pay for these things.
It would take some time to ramp up production to that point, but its exponential: after you build the first robot that can build and repair other robots, all you have to do is sit back and watch the robot population increase.
We could have a fully automated system where any food, any electronics, any luxury items, are all automatically produced and delivered to the local store, or even to your door.
Human economies were built on top of human nature, which apparently tends to deal with scarce resources by claiming them.
When we totally eliminate scarcity, we eliminate the need for our current economic models.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096896</id>
	<title>Re:Current computation models not enough</title>
	<author>DeltaQH</author>
	<datestamp>1265921520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Answer to your thought experiment.<br> <br>
We don't yet have neuron replacements...</htmltext>
<tokenext>Answer to your thought experiment .
We dont have yet neuron replacements.... .</tokentext>
<sentencetext>Answer to your thought experiment.
We dont have yet neuron replacements.....</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095824</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093820</id>
	<title>Re:Let's see.</title>
	<author>zeroRenegade</author>
	<datestamp>1265033640000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>Awesome quote.

The stuff people imagine is hysterical.

For a robot to evolve free will, that will must be given to it by humans, so in essence it is not free at all. If robots are evil, it is because people are inherently evil and program them to think methodically instead of compassionately.

It is easy to program the functionalism of a human mind, but behaviorism will never be fully understood.

Computers are already superhuman in many ways, but composing music, writing classic literature, cooking lavish meals: it will never ever happen. Keep dreaming, dreamers.

Any creativity a robot contains would have come from our own instruction.
	</htmltext>
<tokenext>Awesome quote .
The stuff people imagine is hysterical .
For a robot to evolve a free will , it will be given to him by humans , so in essence it is not free at all .
If robots are evil , it is because people are inherently evil and program it to think methodically instead of compassionately .
It is easy to program the functionalism of a human mind , but behaviorism will never be fully understood .
Computers are already superhuman in many ways , but to compose music , write classic literature , cook lavish meals , it will never ever happen .
Keep dreaming dreamers .
Any creativity a robot contains would have come from our own instruction .</tokentext>
<sentencetext>Awesome quote.
The stuff people imagine is hysterical.
For a robot to evolve a free will, it will be given to him by humans, so in essence it is not free at all.
If robots are evil, it is because people are inherently evil and program it to think methodically instead of compassionately.
It is easy to program the functionalism of a human mind, but behaviorism will never be fully understood.
Computers are already superhuman in many ways, but to compose music, write classic literature, cook lavish meals, it will never ever happen.
Keep dreaming dreamers.
Any creativity a robot contains would have come from our own instruction.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092804</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094644</id>
	<title>Re:The Turing Test</title>
	<author>Rennt</author>
	<datestamp>1265037780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>This kind of thinking is one of the major things standing in the way of AGI. The complex behaviors of the human mind are what leads to intelligence, they do not detract from it. Our ability to uncover the previously unknown workings of a system comes from our ability to abstract aspects of unrelated experiences and apply/attempt to apply them to the new situation. This can not be achieved by a single-minded number crunching machine, but instead evolves out of an adaptable human being as he goes about his daily life.</p></div><p>Perhaps. But maybe human behaviour isn't as complex as we think it is. We have made progress in AI by studying insects. It was found that despite very complex interactions, insects simply respond to stimuli in a predictable way. This research has brought us stuff like teams of soccer-playing robots.</p><p>Now the average human is capable of much more complex interactions than an ant. You might argue that the human is infinitely more complex, but that sounds like "irreducible complexity" to me, so it doesn't fly. Despite notions of "specialness" we are still just responding to stimuli - there are just more inputs, bandwidth, and interconnects. What I'm getting at is that complex human-level interactions could be implemented in relatively simple and maintainable software.</p><p><div class="quote"><p>Finally, the assertion that an AGI would need to mask it's amazing intellect to pass as human is silly. When was the last time you read a particularly insightful comment and concluded that it was written by a computer? When did you notice that the spelling and punctuation in a comment was too perfect? People see that and they don't think anything of it.</p></div><p>That wasn't his point. He was saying that AI will never be like human intelligence because we would not implement some of the flaws.</p>
	</htmltext>
<tokenext>This kind of thinking is one of the major things standing in the way of AGI .
The complex behaviors of the human mind are what leads to intelligence , they do not detract from it .
Our ability to uncover the previously unknown workings of a system comes from our ability to abstract aspects of unrelated experiences and apply/attempt to apply them to the new situation .
This can not be achieved by a single-minded number crunching machine , but instead evolves out of an adaptable human being as he goes about his daily life.Perhaps .
But maybe human behaviour is n't as complex as we think it is .
We have made progress in AI by studying insects .
It was found that despite very complex interactions , insects simply respond to stimuli in a predictable way .
This research has brought us stuff like teams of soccer playing robots.Now the average human is capable of much more complex interactions then an ant .
You might argue that the human is infinitely more complex , but that sounds like " irreducible complexity " to me so does n't fly .
Despite notions of " specialness " we still just responding to stimuli - there is just more inputs , bandwith , and interconnects .
What I 'm getting at is complex human-level interactions could be implemented in relatively simple and maintainable software.Finally , the assertion that an AGI would need to mask it 's amazing intellect to pass as human is silly .
When was the last time you read a particularly insightful comment and concluded that it was written by a computer ?
When did you notice that the spelling and punctuation in a comment was too perfect ?
People see that and they do n't think anything of it.That was n't his point .
He was saying that AI will never be like human intelligence because we would not implement some of the flaws .</tokentext>
<sentencetext>This kind of thinking is one of the major things standing in the way of AGI.
The complex behaviors of the human mind are what leads to intelligence, they do not detract from it.
Our ability to uncover the previously unknown workings of a system comes from our ability to abstract aspects of unrelated experiences and apply/attempt to apply them to the new situation.
This can not be achieved by a single-minded number crunching machine, but instead evolves out of an adaptable human being as he goes about his daily life.Perhaps.
But maybe human behaviour isn't as complex as we think it is.
We have made progress in AI by studying insects.
It was found that despite very complex interactions, insects simply respond to stimuli in a predictable way.
This research has brought us stuff like teams of soccer playing robots.Now the average human is capable of much more complex interactions then an ant.
You might argue that the human is infinitely more complex, but that sounds like "irreducible complexity" to me so doesn't fly.
Despite notions of "specialness" we still just responding to stimuli - there is just more inputs, bandwith, and interconnects.
What I'm getting at is complex human-level interactions could be implemented in relatively simple and maintainable software.Finally, the assertion that an AGI would need to mask it's amazing intellect to pass as human is silly.
When was the last time you read a particularly insightful comment and concluded that it was written by a computer?
When did you notice that the spelling and punctuation in a comment was too perfect?
People see that and they don't think anything of it.That wasn't his point.
He was saying that AI will never be like human intelligence because we would not implement some of the flaws.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093108</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968</id>
	<title>This touches on a problem I have</title>
	<author>geekoid</author>
	<datestamp>1265029440000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>thought about a lot... maybe too much.</p><p>What happens in society when someone makes a robot clever enough to handle menial work?<br>Imagin id all Ditch diggers, burger flippers and sandwich makers, factory workers are all robotic? What happens to the people?<br>The false claim is that they will go work in the robot industry, but that is a misdirection, at best.<br>A) It will take fewer people to maintain them than the jobs they displace.</p><p>B) If robots are that sophisticated, then they can repair each other.</p><p>There will be millions and millions of people who don't work, and have no option to work.<br>Does this mean there is a fundamental shift in the idea of welfare? Do we only allow individual people to own them and choose between renting out their robot or working themselves?</p><p>Having tens of millions of people too poor to eat properly or afford housing and healthcare is a bad thing and would ultimately drag down the country. This technology will happen and it should happen. Personally I'd like to find a way for people to have more leisure time and let the robots work. Our current economic and government structure can't handle this kind of change. Could you imagine the hullabaloo if people were being replaced by robots at this scake right now if someone said there needs to be a shift toward an economic place where people get paid without a job?</p></htmltext>
<tokenext>thought about a lot..maybe too much.What happens in society when someone makes a robot clever enough to handle menial work ? Imagin id all Ditch diggers , burger flippers and sandwich maker , factory workers are all robotic ?
What happens to the people ? The false claim is that they will go work in the robot industry , but that is a misdirection , at best.A ) It will take less people to maintain them then the jobs they displace.B ) If robots are that sophisticated , then they can repair each other.There will be millions and million of people who do n't work , and have no option to work.Does this mean there is a fundamental shift in the idea of welfare ?
do we only allow individual people to own them and choose between renting out their robot or working themselves ? Having 10 's million of people too poor to eat properly , afford housing , and healthcare is a bad thing and would ultimately drag down the country .
This technology will happen and it should happen .
Personally I 'd like to find a way for people to have more leisure time and let the robots work .
Our current economic and government structure ca n't handle this kind of change .
Could you imagine the hellabalu if people where being replaced by robots at this scake right now is someone said there needs to be a shift toward an economic place where people get paid without a job ?</tokentext>
<sentencetext>thought about a lot..maybe too much.What happens in society when someone makes a robot clever enough  to handle menial work?Imagin id all Ditch diggers, burger flippers and sandwich maker, factory workers are all robotic?
What happens to the people?The false claim is that they will go work in the robot industry, but that is a misdirection, at best.A) It will take less people to maintain them then the jobs they displace.B) If robots are that sophisticated, then they can repair each other.There will be millions and million of people who don't work, and have no option to work.Does this mean there is a fundamental shift in the idea of welfare?
do we only allow individual people to own them and choose between renting out their robot or working themselves?Having 10's million of people too poor to eat properly, afford housing, and healthcare is a bad thing and would ultimately drag down the country.
This technology will happen and it should happen.
Personally I'd like to find a way for people to have more leisure time and let the robots work.
Our current economic and government structure can't handle this kind of change.
Could you imagine the hellabalu if people where being replaced by robots at this scake right now is someone said there needs to be a shift toward an economic place where people get paid without a job?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095864</id>
	<title>Flying cars.</title>
	<author>barfy</author>
	<datestamp>1265046840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Flying cars</p></htmltext>
<tokenext>Flying cars</tokentext>
<sentencetext>Flying cars</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096292</id>
	<title>Re:Let's see.</title>
	<author>Prof.Phreak</author>
	<datestamp>1265050860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You're assuming AI will be programmed, or that we are capable of programming it. Chances are, AI will be evolved (and then self-evolve), with humans having little understanding of how or why. It will just be. Chances are, it will know us and itself better than we know it.</p></htmltext>
<tokenext>You 're assuming AI will be programmed , or that we are capable of programming it .
Chances are , AI will be evolved ( and then self-evolve ) , with humans having little understanding of how or why .
It will just be .
Chances are , it will know more about us and itself better than we know about it .</tokentext>
<sentencetext>You're assuming AI will be programmed, or that we are capable of programming it.
Chances are, AI will be evolved (and then self-evolve), with humans having little understanding of how or why.
It will just be.
Chances are, it will know more about us and itself better than we know about it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093820</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093050</id>
	<title>Depends what you want. They're great at chess.</title>
	<author>fragmatic43</author>
	<datestamp>1265029860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>In the good old days, I could sucker Sargon into so many stupid moves. Now the chess programs are great. They're usually better than almost every human. So in that field, they beat humans.

And arithmetic? They run rings around us, although they do make some mistakes with odd floating point problems.<nobr> <wbr></nobr>:-)</htmltext>
<tokenext>In the good old days , I could sucker Sargon into so many stupid moves .
Now the chess programs are great .
They 're usually better than almost every human .
So in that field , they beat humans .
And arithmetic ?
They run rings around us , although they do make some mistakes with odd floating point problems .
: - )</tokentext>
<sentencetext>In the good old days, I could sucker Sargon into so many stupid moves.
Now the chess programs are great.
They're usually better than almost every human.
So in that field, they beat humans.
And arithmetic?
They run rings around us, although they do make some mistakes with odd floating point problems.
:-)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100958</id>
	<title>Re:What kind of jobs will there be?</title>
	<author>Xanator</author>
	<datestamp>1265910360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>Robots will do all the restocking of the shelves and cashiers in stores, there will probably be McRobots instead of McDonalds. </p></div><p>
Why would we use a very expensive robot to do menial work that a human on minimum wage can do?<br> <br>
No, the ones who should be worried are those whose work could not only be replaced but improved by a robot. Take, say, an architect with a huge salary; a robot with his abilities would replace him and save the company money.
<br> <br>I think that when that happens, humans will be the ones doing the menial work, not robots.</p>
	</htmltext>
<tokenext>Robots will do all the restocking of the shelves and cashiers in stores , there will probably be McRobots instead of McDonalds .
Why would we use a very expensive robot to do menial work that a human with minimum wage can do ?
No , the ones that should be worried are the ones that their work could not only be replaced but improved by a robot , see lets say some architect which has a huge salary , a robot with his abilities would replace him and save money for the company .
I think that when that happens humans will be the ones doing the menial work , not robots</tokentext>
<sentencetext>Robots will do all the restocking of the shelves and cashiers in stores, there will probably be McRobots instead of McDonalds.
Why would we use a very expensive robot to do menial work that a human with minimum wage can do?
No, the ones that should be worried are the ones that their work could not only be replaced but improved by a robot, see lets say some architect which has a huge salary, a robot with his abilities would replace him and save money for the company.
I think that when that happens humans will be the ones doing the menial work, not robots
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093540</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098312</id>
	<title>Re:No way.</title>
	<author>Anonymous</author>
	<datestamp>1265895840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>In April <strong>last year</strong>, a robot made a scientific discovery by itself: <a href="http://www.wired.com/wiredscience/2009/04/robotscientist/" title="wired.com" rel="nofollow">http://www.wired.com/wiredscience/2009/04/robotscientist/</a> [wired.com]</htmltext>
<tokenext>In April last year , a robot made a scientific discovery by itself : http : //www.wired.com/wiredscience/2009/04/robotscientist/ [ wired.com ]</tokentext>
<sentencetext>In April last year, a robot made a scientific discovery by itself: http://www.wired.com/wiredscience/2009/04/robotscientist/ [wired.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094222</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093246</id>
	<title>I thought everyone knew the answer</title>
	<author>HangingChad</author>
	<datestamp>1265030760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p> <i>August 29, 1997 Skynet becomes self-aware at 02:14 am</i>

</p><p>Do I win a prize?</p></htmltext>
<tokenext>August 29 , 1997 Skynet becomes self-aware at 02 : 14 am Do I win a prize ?</tokentext>
<sentencetext> August 29, 1997 Skynet becomes self-aware at 02:14 am

Do I win a prize?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900</id>
	<title>Definitions</title>
	<author>Anonymous</author>
	<datestamp>1265029080000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>Please define "intelligence."</p><p>Calculation speed?  An abacus was smarter than humans.</p><p>Memory?  Not sure who wins that.</p><p>Ingenuity?  Humans seem to rule on this one.  I don't know if I count analyzing every single possible permutation of outcomes as "ingenuity."  And I'm not sure we really understand what creativity, ingenuity, etc., really are in our brains.</p><p>Consciousness?  We can barely define that, let alone define it for a computer.</p><p>It seems most people think "calculation speed and memory" when they talk about computer "intelligence."</p></htmltext>
<tokenext>Please define " intelligence .
" Calculation speed ?
An abacus was smarter than humans.Memory ?
Not sure who wins that.Ingenuity ?
Humans seem to rule on this one .
I do n't know if I count analyzing every single possible permutation of outcomes as " ingenuity .
" And I 'm not sure we really understand what creativity , ingenuity , etc. , really are in our brains.Consciousness ?
We can barely define that , let alone define it for a computer.It seems most people seem to think " calculation speed and memory " when they talk about computer " intelligence .
"</tokentext>
<sentencetext>Please define "intelligence.
"Calculation speed?
An abacus was smarter than humans.Memory?
Not sure who wins that.Ingenuity?
Humans seem to rule on this one.
I don't know if I count analyzing every single possible permutation of outcomes as "ingenuity.
"  And I'm not sure we really understand what creativity, ingenuity, etc., really are in our brains.Consciousness?
We can barely define that, let alone define it for a computer.It seems most people seem to think "calculation speed and memory" when they talk about computer "intelligence.
"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094406</id>
	<title>You define intelligence as what your brain's doing</title>
	<author>uassholes</author>
	<datestamp>1265036160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Which is why you are wrong.  All of these replies about "never" and "we heard that 50 years ago" are due to you (and the naive researchers 50 years ago) deciding that you are intelligent, and that that's what intelligence is.  Yet many of you watched the Super Bowl.
</p><p>
The CPU in our heads contains portions that originated with mud slime.  There is no need to duplicate that, nor is intelligence involved in most of the things that our brains do.   A space alien could never pass a Turing test, and it's a stupid conceit on our parts to torture a computer into trying to, like Dr. Frankenstein making his monster.
</p><p>
Maybe there never will be a software simulacrum fit for a freak show, but software constantly becomes more "intelligent" in a more objective sense.</p></htmltext>
<tokenext>Which is why you are wrong .
All of these replies about " never " and " we heard that 50 years ago " are due to you ( and the naive researchers 50 years ago ) deciding that you are intelligent , and that that 's what intelligence is .
Yet many of you watched the Super Bowl .
The CPU in our heads contains portions that originated with mud slime .
There is no need to duplicate that , nor is intelligence involved in most of the things that our brains do .
A space alien could never pass a Turing test , and it 's a stupid conceit on our parts to torture a computer into trying to , like Dr. Frankenstein making his monster .
Maybe there never will be a software simulacrum fit for a freak show , but software constantly becomes more " intelligent " in a more objective sense .</tokentext>
<sentencetext>Which is why you are wrong.
All of these replies about "never" and "we heard that 50 years ago" are due to you (and the naive researchers 50 years ago) deciding that you are intelligent, and that that's what intelligence is.
Yet many of you watched the Super Bowl.
The CPU in our heads contains portions that originated with mud slime.
There is no need to duplicate that, nor is intelligence involved in most of the things that our brains do.
A space alien could never pass a Turing test, and it's a stupid conceit on our parts to torture a computer into trying to, like Dr. Frankenstein making his monster.
Maybe there never will be a software simulacrum fit for a freak show, but software constantly becomes more "intelligent" in a more objective sense.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094264</id>
	<title>Re:What is AI anyway?</title>
	<author>DMUTPeregrine</author>
	<datestamp>1265035560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Just tried picking 60 numbers from 1-6:<br>
1s:13<br>
2s:10<br>
3s:9<br>
4s:8<br>
5s:12<br>
6s:8<br>
Chi^2=2.2<br> <br>
Rather good, actually. p-value 0.821<br>

And rolling a 6-sided die 60 times:<br>
1s:11<br>
2s:17<br>
3s:8<br>
4s:9<br>
5s:6<br>
6s:9<br>
Chi^2=7.2<br> <br>
I think I have a biased die, but p-value of 0.206 isn't enough to be sure...<br>

And using random.org's 6-sided die rng 60 times:<br>
1s:11<br>
2s:5<br>
3s:10<br>
4s:11<br>
5s:11<br>
6s:12<br>
Chi^2=3.2<br> <br>
p-value 0.669.
<br> <br>
So I did better than random.org. Anecdote, data, etc, etc.<br>
My sequence: 514321345112365412356125124545312635142354611236656416235412<br>
My die rolls: 134616625343623215212135142224622164432515642111622264345232<br>
Random.org: 165433336162363432153514436154465154564615544632166142512651</htmltext>
<tokenext>Just tried picking 60 numbers from 1-6 : 1s : 13 2s : 10 3s : 9 4s : 8 5s : 12 6s : 8 Chi ^ 2 = 2.2 Rather good , actually .
p-value 0.821 And rolling a 6-sided die 60 times : 1s : 11 2s : 17 3s : 8 4s : 9 5s : 6 6s : 9 Chi ^ 2 = 7.2 I think I have a biased die , but p-value of 0.206 is n't enough to be sure.. . And using random.org 's 6-sided die rng 60 times : 1s : 11 2s : 5 3s : 10 4s : 11 5s : 11 6s : 12 Chi ^ 2 = 3.2 p-value 0.669 .
So I did better than random.org .
Anecdote , data , etc , etc .
My sequence : 514321345112365412356125124545312635142354611236656416235412 My die rolls : 134616625343623215212135142224622164432515642111622264345232 Random.org : 165433336162363432153514436154465154564615544632166142512651</tokentext>
<sentencetext>Just tried picking 60 numbers from 1-6:
1s:13
2s:10
3s:9
4s:8
5s:12
6s:8
Chi^2=2.2 
Rather good, actually.
p-value 0.821

And rolling a 6-sided die 60 times:
1s:11
2s:17
3s:8
4s:9
5s:6
6s:9
Chi^2=7.2 
I think I have a biased die, but p-value of 0.206 isn't enough to be sure...

And using random.org's 6-sided die rng 60 times:
1s:11
2s:5
3s:10
4s:11
5s:11
6s:12
Chi^2=3.2 
p-value 0.669.
So I did better than random.org.
Anecdote, data, etc, etc.
My sequence: 514321345112365412356125124545312635142354611236656416235412
My die rolls: 134616625343623215212135142224622164432515642111622264345232
Random.org: 165433336162363432153514436154465154564615544632166142512651</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093120</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098204</id>
	<title>Al will surpass human intelligence when...</title>
	<author>aardwolf64</author>
	<datestamp>1265894640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Ok, so Al may not ever surpass human intelligence, but he will be getting close to matching us when he finally gives up that Global Warming hoax of his... and the whole concept that he invented the Internet.</p></htmltext>
<tokenext>Ok , so Al may not ever surpass human intelligence , but he will be getting close to matching us when he finally gives up that Global Warming hoax of his... and the whole concept that he invented the Internet .</tokentext>
<sentencetext>Ok, so Al may not ever surpass human intelligence, but he will be getting close to matching us when he finally gives up that Global Warming hoax of his... and the whole concept that he invented the Internet.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31186230</id>
	<title>Reasonable Assumption</title>
	<author>Tibia1</author>
	<datestamp>1266515580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Everyone keeps talking about how people won't be able to understand our own brains, and that 'since we haven't come "very far" in the past 50 years in AI, we're doomed to wait an extremely long time'. <br> <br>People forget what we've already created. Targeted 'narrow' AI that can learn (currently at the basic level) and produce data that help humans draw conclusions. Conclusions that couldn't have been drawn without narrow AI applications. <br> <br>
I believe that when the brain scanning technology becomes advanced enough (extremely high resolution and high quality modelling of the brain), we will use narrow AI (that learns by itself how to examine the scans we've produced), and the data that AI produces will lead to our understanding of how the brain really works and how to replicate it in a machine. This subject is highly debatable at this stage in the game, but I think that even 20 years is a long time before we'll have the brain scans needed to perform in depth analysis on what makes us so smart.<br> <br>
Many people simply reject the idea that we'll ever be capable of producing a machine more intelligent than us. The only thing that could stop us from that is our own destruction in these years to come where civilization is so utterly fragile.</htmltext>
<tokenext>Everyone keeps talking about how people wo n't be able to understand our own brains , and that 'since we have n't come " very far " in the past 50 years in AI , we 're doomed to wait an extremely long time' .
People forget what we 've already created .
Targeted 'narrow ' AI that can learn ( currently at the basic level ) and produce data that help humans draw conclusions .
Conclusions that could n't have been drawn without narrow AI applications .
I believe that when the brain scanning technology becomes advanced enough ( extremely high resolution and high quality modelling of the brain ) , we will use narrow AI ( that learns by itself how to examine the scans we 've produced ) , and the data that AI produces will lead to our understanding of how the brain really works and how to replicate it in a machine .
This subject is highly debatable at this stage in the game , but I think that even 20 years is a long time before we 'll have the brain scans needed to perform in depth analysis on what makes us so smart .
Many people simply reject the idea that we 'll ever be capable of producing a machine more intelligent than us .
The only thing that could stop us from that is our own destruction in these years to come where civilization is so utterly fragile .</tokentext>
<sentencetext>Everyone keeps talking about how people won't be able to understand our own brains, and that 'since we haven't come "very far" in the past 50 years in AI, we're doomed to wait an extremely long time'.
People forget what we've already created.
Targeted 'narrow' AI that can learn (currently at the basic level) and produce data that help humans draw conclusions.
Conclusions that couldn't have been drawn without narrow AI applications.
I believe that when the brain scanning technology becomes advanced enough (extremely high resolution and high quality modelling of the brain), we will use narrow AI (that learns by itself how to examine the scans we've produced), and the data that AI produces will lead to our understanding of how the brain really works and how to replicate it in a machine.
This subject is highly debatable at this stage in the game, but I think that even 20 years is a long time before we'll have the brain scans needed to perform in depth analysis on what makes us so smart.
Many people simply reject the idea that we'll ever be capable of producing a machine more intelligent than us.
The only thing that could stop us from that is our own destruction in these years to come where civilization is so utterly fragile.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093512</id>
	<title>Don't Even Think About It</title>
	<author>NicknamesAreStupid</author>
	<datestamp>1265032380000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext>Machines will only 'think' like humans when they have human emotions.  All reasoning and abstract thought are based on emotions, which were the basis of all human interaction for countless millennia before humans spoke words.  We will never believe that machines or anything else can be 'human-like' unless we feel it.  Just look at the Loebner contest (http://www.loebner.net).  Since there is no machine algorithm for this test (duh!), they use people to make subjective decisions as to whether unseen respondents 'seem' human.  If the responses do not 'seem' right, then the respondent does not pass.  It is amazing how many humans (used as controls) do not pass this Turing test, giving new meaning to "you don't feel right to me."  Without human feelings there would be no human reasoning, no 'intelligence.'  If this reasoning really bothers you, then you have helped prove my point.
<br> <br>
As for these AI guys, their conclusions are something of a paradox.  If they are as wrong as some believe and dumb as others say, then it may not take much more to create a machine to be as 'intelligent.'  Their question may be better put, "when will we feel that humans have become as dumb as their machines?"</htmltext>
<tokenext>Machines will only 'think ' like humans when they have human emotions .
All reasoning and abstract thought are based on emotions , which were the basis of all human interaction for countless millennia before humans spoke words .
We will never believe that machines or anything else can be 'human-like ' unless we feel it .
Just look at the Loebner contest ( http : //www.loebner.net ) .
Since there is no machine algorithm for this test ( duh !
) , they use people to make subjective decisions as to whether unseen respondents 'seem ' human .
If the responses do not 'seem ' right , then the respondent does not pass .
It is amazing how many humans ( used as controls ) do not pass this Turing test , giving new meaning to " you do n't feel right to me .
" Without human feelings there would be no human reasoning , no 'intelligence .
' If this reasoning really bothers you , then you have helped prove my point .
As for these AI guys , their conclusions are something of a paradox .
If they are as wrong as some believe and dumb as others say , then it may not take much more to create a machine to be as 'intelligent .
' Their question may be better put , " when will we feel that humans have become as dumb as their machines ?
"</tokentext>
<sentencetext>Machines will only 'think' like humans when they have human emotions.
All reasoning and abstract thought are based on emotions, which were the basis of all human interaction for countless millennia before humans spoke words.
We will never believe that machines or anything else can be 'human-like' unless we feel it.
Just look at the Loebner contest (http://www.loebner.net).
Since there is no machine algorithm for this test (duh!
), they use people to make subjective decisions as to whether unseen respondents 'seem' human.
If the responses do not 'seem' right, then the respondent does not pass.
It is amazing how many humans (used as controls) do not pass this Turing test, giving new meaning to "you don't feel right to me.
"  Without human feelings there would be no human reasoning, no 'intelligence.
'  If this reasoning really bothers you, then you have helped prove my point.
As for these AI guys, their conclusions are something of a paradox.
If they are as wrong as some believe and dumb as others say, then it may not take much more to create a machine to be as 'intelligent.
'  Their question may be better put, "when will we feel that humans have become as dumb as their machines?
"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095024</id>
	<title>Re:Space shows</title>
	<author>Anonymous</author>
	<datestamp>1265040300000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>Do you humor chimpanzees for leading to our evolutionary branch? Sure we keep a few in zoos, but what about all the others whose habitat we regularly destroy for our own selfish needs? Do you think an AI would act differently? Is there some threshold of intelligence where suddenly you care about humoring ants, or does nearly everyone not pay them any attention and try to eradicate them if they get in the way or become pests?</p></htmltext>
<tokenext>Do you humor chimpanzees for leading to our evolutionary branch ?
Sure we keep a few in zoos , but what about all the others whose habitat we regularly destroy for our own selfish needs ?
Do you think an AI would act differently ?
Is there some threshold of intelligence where suddenly you care about humoring ants , or does nearly everyone not pay them any attention and try to eradicate them if they get in the way or become pests ?</tokentext>
<sentencetext>Do you humor chimpanzees for leading to our evolutionary branch?
Sure we keep a few in zoos, but what about all the others whose habitat we regularly destroy for our own selfish needs?
Do you think an AI would act differently?
Is there some threshold of intelligence where suddenly you care about humoring ants, or does nearly everyone not pay them any attention and try to eradicate them if they get in the way or become pests?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092918</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093120</id>
	<title>Re:What is AI anyway?</title>
	<author>Daniel Dvorkin</author>
	<datestamp>1265030160000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p><i>How does the brain choose a random number?</i></p><p>It tells the body to roll a die.  If you try to pick random numbers by just thinking about it, you'll do a spectacularly bad job.</p></htmltext>
<tokenext>How does the brain choose a random number ? It tells the body to roll a die .
If you try to pick random numbers by just thinking about it , you 'll do a spectacularly bad job .</tokentext>
<sentencetext>How does the brain choose a random number?
It tells the body to roll a die.
If you try to pick random numbers by just thinking about it, you'll do a spectacularly bad job.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095390</id>
	<title>Re:Space shows</title>
	<author>Anonymous</author>
	<datestamp>1265043180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>read that book too<br>down and out in the magic kingdom</p></htmltext>
<tokenext>read that book too down and out in the magic kingdom</tokentext>
<sentencetext>read that book too down and out in the magic kingdom</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092918</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095052
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_82</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093392
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093440
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094626
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31101484
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_93</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31102748
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_143</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093120
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094316
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_69</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092812
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093124
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_55</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092918
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095390
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_119</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093166
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093556
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099608
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_105</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093196
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_79</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092956
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093570
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_63</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093502
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_129</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097140
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093560
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_73</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096320
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096494
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094222
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098312
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_132</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093890
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100616
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_58</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097858
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092956
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100358
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_90</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093204
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_108</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093288
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_140</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093120
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094264
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096370
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_68</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093120
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098138
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_52</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099384
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_118</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092804
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095834
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094174
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_76</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097144
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092812
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093132
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_87</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093150
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_62</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100128
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095272
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_124</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093112
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095824
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096896
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_135</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099998
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_97</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092778
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093136
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_84</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092962
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093350
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_95</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093108
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094544
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_145</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093734
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098822
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094048
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093112
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094558
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_111</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096386
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095490
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097096
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_71</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093172
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31231188
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093108
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099204
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_121</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095382
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093120
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094264
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095498
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096432
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_86</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092812
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093524
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093198
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31101440
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_148</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097256
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_134</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093112
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094358
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_92</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097012
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_142</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092918
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095210
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_100</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093108
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097166
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097062
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_60</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093112
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097260
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093108
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098156
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_110</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092804
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093386
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_78</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097430
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_89</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096146
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093402
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097272
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_128</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093788
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_47</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093108
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096156
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093082
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094262
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097372
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_139</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093082
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093994
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_99</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092850
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093476
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_57</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093540
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100958
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093120
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094264
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31107498
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092804
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093820
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099026
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_107</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092896
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093996
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_65</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31113952
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_51</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094030
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_115</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093144
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_109</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093112
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096304
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_113</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097070
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_75</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31101048
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_123</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097880
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31108988
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_81</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093198
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098652
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_88</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093120
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094264
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31110374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_46</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31110232
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_54</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093624
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095042
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_104</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093448
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_102</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097662
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_64</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093082
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093876
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_126</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096336
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_137</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31111262
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_112</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093256
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093108
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094644
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_49</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092918
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099186
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093082
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100454
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_70</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093732
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_147</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093112
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095856
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_120</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094542
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_131</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31106974
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_59</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092918
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31114038
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_80</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095550
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_91</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095192
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_141</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095248
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_67</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098044
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093082
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099110
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_117</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100748
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092918
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096474
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_77</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095168
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093120
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098408
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_127</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094802
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_136</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093778
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_48</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31103452
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_94</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093284
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_144</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096174
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_130</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31107112
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_56</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092918
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095024
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093234
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095370
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_106</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094206
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_66</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094156
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093108
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094118
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_116</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098116
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_74</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094608
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093108
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099618
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092918
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093818
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_85</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093446
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31110412
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_149</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097128
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_53</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092918
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096836
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_103</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092804
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093820
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096292
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_101</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092804
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093314
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096806
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092850
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093178
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_61</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093152
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_125</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097000
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100648
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096038
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_83</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093252
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_138</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092804
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099454
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_98</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093890
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095616
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_96</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100494
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098588
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_146</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092804
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31101904
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097336
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094150
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_50</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095808
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_114</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092916
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093326
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096478
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_72</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094878
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_122</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093670
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_2323248_133</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092850
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096116
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31101104
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092822
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092900
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098044
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097430
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100494
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093502
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097062
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095042
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097096
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093890
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095616
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100616
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093560
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096146
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31106974
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31111262
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093204
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092948
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095168
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097012
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096432
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096320
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094878
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096038
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094802
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094048
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097128
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096478
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095382
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092788
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100128
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31108988
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096386
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098588
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095052
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095550
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093144
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098116
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099608
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31101048
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094174
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096174
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093456
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093020
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096422
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092956
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100358
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093570
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092778
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093136
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095194
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093166
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093556
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093198
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31101440
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098652
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093402
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097272
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092810
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093082
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093994
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093876
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100454
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099110
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094262
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093044
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093624
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095808
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31110412
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096494
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095850
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092968
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097662
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093284
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31110232
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093288
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093448
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31102748
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093256
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100748
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31107112
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093150
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092918
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096836
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093818
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095210
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31114038
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095390
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096474
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099186
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095024
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31109628
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093172
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31231188
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092988
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093670
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094030
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094206
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100648
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097000
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093392
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096336
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097140
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097336
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094542
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093732
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093120
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094316
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098408
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094264
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096370
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31107498
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31110374
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095498
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098138
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094150
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096806
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097372
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093788
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097144
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093108
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098156
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096156
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097166
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099204
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099618
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094118
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094544
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094644
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094748
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098634
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092850
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093178
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096116
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093476
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092896
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093996
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093112
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094358
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094558
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096304
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095824
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096896
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097260
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095856
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092804
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095834
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093314
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099454
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31101904
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093820
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099026
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096292
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093386
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092764
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095272
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099384
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093566
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095192
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093440
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094626
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31101484
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31103452
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095490
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31099998
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093234
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095370
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093196
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094222
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098312
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093152
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31113952
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31095248
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093446
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094156
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097880
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097256
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094608
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093252
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093778
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097858
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31097070
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092812
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093132
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093524
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093124
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094556
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092916
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093326
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093540
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31100958
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093734
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31098822
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31094350
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092962
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31093350
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092840
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31092724
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_2323248.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_2323248.31096538
</commentlist>
</conversation>
