<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_03_31_1440205</id>
	<title>MIT Finds 'Grand Unified Theory of AI'</title>
	<author>CmdrTaco</author>
	<datestamp>1270051020000</datestamp>
	<htmltext>aftab14 writes <i>"'What's brilliant about this (approach) is that it allows you to <a href="http://web.mit.edu/newsoffice/2010/ai-unification.html">build a cognitive model</a> in a much more straightforward and transparent way than you could do before,' says Nick Chater, a professor of cognitive and decision sciences at University College London. 'You can imagine all the things that a human knows, and trying to list those would just be an endless task, and it might even be an infinite task. But the magic trick is saying, "No, no, just tell me a few things," and then the brain &mdash; or in this case the Church system, hopefully somewhat analogous to the way the mind does it &mdash; can churn out, using its probabilistic calculation, all the consequences and inferences. And also, when you give the system new information, it can figure out the consequences of that.'"</i></htmltext>
<tokentext>aftab14 writes " 'What 's brilliant about this ( approach ) is that it allows you to build a cognitive model in a much more straightforward and transparent way than you could do before, ' says Nick Chater , a professor of cognitive and decision sciences at University College London .
'You can imagine all the things that a human knows , and trying to list those would just be an endless task , and it might even be an infinite task .
But the magic trick is saying , " No , no , just tell me a few things , " and then the brain — or in this case the Church system , hopefully somewhat analogous to the way the mind does it — can churn out , using its probabilistic calculation , all the consequences and inferences .
And also , when you give the system new information , it can figure out the consequences of that .
' "</tokentext>
<sentencetext>aftab14 writes "'What's brilliant about this (approach) is that it allows you to build a cognitive model in a much more straightforward and transparent way than you could do before,' says Nick Chater, a professor of cognitive and decision sciences at University College London.
'You can imagine all the things that a human knows, and trying to list those would just be an endless task, and it might even be an infinite task.
But the magic trick is saying, "No, no, just tell me a few things," and then the brain — or in this case the Church system, hopefully somewhat analogous to the way the mind does it — can churn out, using its probabilistic calculation, all the consequences and inferences.
And also, when you give the system new information, it can figure out the consequences of that.
'"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688806</id>
	<title>AI 101</title>
	<author>Anonymous</author>
	<datestamp>1270054860000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext><p>So they discovered prolog?</p></htmltext>
<tokentext>So they discovered prolog ?</tokentext>
<sentencetext>So they discovered prolog?</sentencetext>
</comment>
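The "so they discovered prolog?" quip above refers to rule-based inference: give the system a few facts and rules, and it derives the consequences. A minimal forward-chaining sketch in Python (illustrative only; the facts, rules, and predicate names are invented, and real Prolog uses backward chaining with unification):

```python
# Tiny forward-chaining inference sketch. Facts and rules are atoms of the
# form (predicate, subject); a rule fires when all its premises are derived.
facts = {("mammal", "bat"), ("flies", "bat")}

# Each rule: (set of premises, conclusion). All examples here are invented.
rules = [
    ({("mammal", "bat")}, ("warm_blooded", "bat")),
    ({("warm_blooded", "bat"), ("flies", "bat")}, ("flying_mammal", "bat")),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises all hold until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(("flying_mammal", "bat") in forward_chain(facts, rules))  # True
```

This is the "tell me a few things and churn out the consequences" pattern from the summary, minus the probabilistic part that Church adds on top.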
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689128</id>
	<title>This looks familiar</title>
	<author>Meditato</author>
	<datestamp>1270056180000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext>I looked at the documentation of this "Church Programming language". Scheme and most other Lisp derivatives have been around longer and can do more. This is neither news nor a revolutionary discovery.</htmltext>
<tokentext>I looked at the documentation of this " Church Programming language " .
Scheme and most other Lisp derivatives have been around longer and can do more .
This is neither news nor a revolutionary discovery .</tokentext>
<sentencetext>I looked at the documentation of this "Church Programming language".
Scheme and most other Lisp derivatives have been around longer and can do more.
This is neither news nor a revolutionary discovery.</sentencetext>
</comment>
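For context on the comparison above: what Church adds to a Scheme-style language is that programs denote probability distributions and can be conditioned on observations. A rough Python approximation of that idea using rejection sampling (the model and all probabilities here are invented examples, not Church's actual API):

```python
import random

def generative_model():
    """Sample one possible world: is it raining, and is the lawn wet?
    All probabilities are made up for illustration."""
    raining = random.random() < 0.3
    wet_lawn = random.random() < (0.9 if raining else 0.1)
    return raining, wet_lawn

def query(model, condition, n=100_000):
    """Estimate P(raining | condition) by keeping only samples satisfying it."""
    kept = [r for r, w in (model() for _ in range(n)) if condition(r, w)]
    return sum(kept) / len(kept)

# P(raining | lawn is wet) -- conditioning pulls it well above the 0.3 prior.
print(query(generative_model, lambda r, w: w))
```

The point of a Church-style language is that this generate-then-condition pattern is a language primitive rather than something you hand-roll, which is the claimed advance over plain Scheme.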
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689628</id>
	<title>Elephant in the Room</title>
	<author>Anonymous</author>
	<datestamp>1270058340000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><p>Again, as I bring up often with AI researchers, we as humans evolved over millions of years (or were created, doesn't matter) from simple organisms that encoded information that built up simple systems into complex systems. AI, true AI, must be grown, not created. Asking the AI "a bat is a mammal and can fly; can a squirrel?" ignores a foundation of development in intelligence: our brains were created to react and store, not store and react, from various inputs.</p><p>Ask an AI if the stove is hot. It should respond "I don't know, where is the stove?" Rather, AI would try and make an inference based on known data. Since there isn't any, the AI, on a probabilistic measure, would say that blah blah stoves are in use at any given time and there is a blah blah blah. A human would put their hand (a sensor) near the stove and measure the change, if any, in temperature and reply yes or no accordingly. If a human cannot see the stove and has no additional information, either a random guess is in order or an "I have no clue" response of some sort. The brain isn't wired to answer a specific question, but it is wired to correlate independent inputs to draw conclusions based on the assembly and interaction of data and infer and deduce answers.</p><p>Given a film of two people talking, a computer with decent AI would categorize objects, identify people versus say a lamp, determine the people are engaged in action (versus a lamp just sitting there) making that relevant, hear the sound coming from the people, then infer they are talking (making the link). Then, in parallel, the computer would filter out the chair and various scenery in the thread now processing "CONVERSATION". The rest of the information is stored and additional threads may be created as the environment generates other links, but if the AI is paying attention to the conversation then the TTL for the new threads and links should be short. When the conversation mentions the LAMP, the information network should link the LAMP information to the CONVERSATION thread and provide the AI additional information (that was gathering in the background) that travels with the CONVERSATION thread.</p><p>Now the conversation appears to be about the lamp and whether it goes with the room's decor. Again the links should be built, adding retroactively the room's information into the CONVERSATION thread (again expiring information that is irrelevant to a short-term memory buffer), and ultimately, since visual and verbal cues imply that the AI's opinion is wanted, should result in the AI blurting out, "I love Lamp."</p><p>In case you missed it, this was one long Lamp joke...</p></htmltext>
<tokentext>Again , as I bring up often with AI researchers , we as humans evolved over millions of years ( or were created , does n't matter ) from simple organisms that encoded information that built up simple systems into complex systems .
AI , true AI , must be grown , not created .
Asking the AI " a bat is a mammal and can fly ; can a squirrel ? " ignores a foundation of development in intelligence : our brains were created to react and store , not store and react , from various inputs .
Ask an AI if the stove is hot .
It should respond " I do n't know , where is the stove ? "
Rather , AI would try and make an inference based on known data .
Since there is n't any , the AI , on a probabilistic measure , would say that blah blah stoves are in use at any given time and there is a blah blah blah .
A human would put their hand ( a sensor ) near the stove and measure the change , if any , in temperature and reply yes or no accordingly .
If a human can not see the stove and has no additional information , either a random guess is in order or an " I have no clue " response of some sort .
The brain is n't wired to answer a specific question , but it is wired to correlate independent inputs to draw conclusions based on the assembly and interaction of data and infer and deduce answers .
Given a film of two people talking , a computer with decent AI would categorize objects , identify people versus say a lamp , determine the people are engaged in action ( versus a lamp just sitting there ) making that relevant , hear the sound coming from the people , then infer they are talking ( making the link ) .
Then , in parallel , the computer would filter out the chair and various scenery in the thread now processing " CONVERSATION " .
The rest of the information is stored and additional threads may be created as the environment generates other links , but if the AI is paying attention to the conversation then the TTL for the new threads and links should be short .
When the conversation mentions the LAMP , the information network should link the LAMP information to the CONVERSATION thread and provide the AI additional information ( that was gathering in the background ) that travels with the CONVERSATION thread .
Now the conversation appears to be about the lamp and whether it goes with the room 's decor .
Again the links should be built , adding retroactively the room 's information into the CONVERSATION thread ( again expiring information that is irrelevant to a short-term memory buffer ) , and ultimately , since visual and verbal cues imply that the AI 's opinion is wanted , should result in the AI blurting out , " I love Lamp . "
In case you missed it , this was one long Lamp joke ...</tokentext>
<sentencetext>Again, as I bring up often with AI researchers, we as humans evolved over millions of years (or were created, doesn't matter) from simple organisms that encoded information that built up simple systems into complex systems.
AI, true AI, must be grown, not created.
Asking the AI "a bat is a mammal and can fly; can a squirrel?" ignores a foundation of development in intelligence: our brains were created to react and store, not store and react, from various inputs.
Ask an AI if the stove is hot.
It should respond "I don't know, where is the stove?"
Rather, AI would try and make an inference based on known data.
Since there isn't any, the AI, on a probabilistic measure, would say that blah blah stoves are in use at any given time and there is a blah blah blah.
A human would put their hand (a sensor) near the stove and measure the change, if any, in temperature and reply yes or no accordingly.
If a human cannot see the stove and has no additional information, either a random guess is in order or an "I have no clue" response of some sort.
The brain isn't wired to answer a specific question, but it is wired to correlate independent inputs to draw conclusions based on the assembly and interaction of data and infer and deduce answers.
Given a film of two people talking, a computer with decent AI would categorize objects, identify people versus say a lamp, determine the people are engaged in action (versus a lamp just sitting there) making that relevant, hear the sound coming from the people, then infer they are talking (making the link).
Then, in parallel, the computer would filter out the chair and various scenery in the thread now processing "CONVERSATION".
The rest of the information is stored and additional threads may be created as the environment generates other links, but if the AI is paying attention to the conversation then the TTL for the new threads and links should be short.
When the conversation mentions the LAMP, the information network should link the LAMP information to the CONVERSATION thread and provide the AI additional information (that was gathering in the background) that travels with the CONVERSATION thread.
Now the conversation appears to be about the lamp and whether it goes with the room's decor.
Again the links should be built, adding retroactively the room's information into the CONVERSATION thread (again expiring information that is irrelevant to a short-term memory buffer), and ultimately, since visual and verbal cues imply that the AI's opinion is wanted, should result in the AI blurting out, "I love Lamp."
In case you missed it, this was one long Lamp joke...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691128</id>
	<title>Re:Interesting Idea</title>
	<author>kdemetter</author>
	<datestamp>1270064520000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>I have learned today that putting 'grand' and 'unified' at the title of an idea in science is very powerful for marketing.</p></div><p>I admit "MIT Finds Theory of AI" does sound a lot less interesting, though it's probably closer to the truth.</p></htmltext>
<tokentext>I have learned today that putting ' grand ' and ' unified ' at the title of an idea in science is very powerful for marketing .
I admit " MIT Finds Theory of AI " does sound a lot less interesting , though it 's probably closer to the truth .</tokentext>
<sentencetext>I have learned today that putting 'grand' and 'unified' at the title of an idea in science is very powerful for marketing.
I admit "MIT Finds Theory of AI" does sound a lot less interesting, though it's probably closer to the truth.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689496</id>
	<title>Re:Can I get some wafers with that Wine?</title>
	<author>Bigjeff5</author>
	<datestamp>1270057860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Oh yeah, let's not show any respect at all to one of the greatest AI minds in history because you happen to dislike churches.</p><p>Asshole.</p></htmltext>
<tokentext>Oh yeah , let 's not show any respect at all to one of the greatest AI minds in history because you happen to dislike churches .
Asshole .</tokentext>
<sentencetext>Oh yeah, let's not show any respect at all to one of the greatest AI minds in history because you happen to dislike churches.
Asshole.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688868</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689202</id>
	<title>Re:Endless vs. infinite</title>
	<author>Monkeedude1212</author>
	<datestamp>1270056480000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext><p>Simple. One doesn't end and the other goes on forever.</p></htmltext>
<tokentext>Simple .
One does n't end and the other goes on forever .</tokentext>
<sentencetext>Simple.
One doesn't end and the other goes on forever.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688842</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688868</id>
	<title>Can I get some wafers with that Wine?</title>
	<author>gabereiser</author>
	<datestamp>1270055100000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>-1</modscore>
	<htmltext>umm, the Church System?  Does it come complete with molestation?  How about cover ups?  Does it support cover ups?  Seriously, what a terrible name for a "cognitive AI" system.</htmltext>
<tokentext>umm , the Church System ?
Does it come complete with molestation ?
How about cover ups ?
Does it support cover ups ?
Seriously , what a terrible name for a " cognitive AI " system .</tokentext>
<sentencetext>umm, the Church System?
Does it come complete with molestation?
How about cover ups?
Does it support cover ups?
Seriously, what a terrible name for a "cognitive AI" system.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691804</id>
	<title>Re:This looks familiar</title>
	<author>Angst Badger</author>
	<datestamp>1270067160000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>I looked at the documentation of this "Church Programming language". Scheme and most other Lisp derivatives have been around longer and can do more.</p></div><p>Not only that, but more recent languages support actual syntax so that the user does not have to provide the parse tree himself.</p></htmltext>
<tokentext>I looked at the documentation of this " Church Programming language " .
Scheme and most other Lisp derivatives have been around longer and can do more .
Not only that , but more recent languages support actual syntax so that the user does not have to provide the parse tree himself .</tokentext>
<sentencetext>I looked at the documentation of this "Church Programming language".
Scheme and most other Lisp derivatives have been around longer and can do more.
Not only that, but more recent languages support actual syntax so that the user does not have to provide the parse tree himself.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689128</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31692808</id>
	<title>Re:Interesting Idea</title>
	<author>Anonymous</author>
	<datestamp>1270027560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>So if you have a suspected witch, but *don't* have a duck handy, you can use a ship?  Science is wonderful!</p></htmltext>
<tokentext>So if you have a suspected witch , but * do n't * have a duck handy , you can use a ship ?
Science is wonderful !</tokentext>
<sentencetext>So if you have a suspected witch, but *don't* have a duck handy, you can use a ship?
Science is wonderful!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690384</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690340</id>
	<title>In Russian Accent</title>
	<author>Anonymous</author>
	<datestamp>1270061460000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>In Russia, Cognitive Model builds you</p></htmltext>
<tokentext>In Russia , Cognitive Model builds you</tokentext>
<sentencetext>In Russia, Cognitive Model builds you</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690050</id>
	<title>FINALLY!</title>
	<author>Hurricane78</author>
	<datestamp>1270060200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I have always said that psychology, and nowadays even neurology, suffers massively from the <a href="http://en.wikipedia.org/wiki/Rube_Goldberg_machine" title="wikipedia.org">Rube-Goldberg-machine</a> [wikipedia.org] syndrome. The brain is an extremely <a href="http://en.wikipedia.org/wiki/Emergence" title="wikipedia.org">emergent</a> [wikipedia.org] system. Perhaps the most emergent system known to man. <em>Compared to the results</em>, the basic rules are extremely simple. But they seem to try to analyze all those resulting effects as if they were additional specific rules, instead of just results of the basic rules.</p><p>I am absolutely certain that if you create a set of simulated life-forms based on &ldquo;blank&rdquo; neural nets of a sufficient size, including hormones / neurotransmitters, and let them evolve through natural selection so they modify themselves, it is only a matter of time until you come up with a working life-form of the same or higher intelligence than a human. Of course this life-form will have a different base layout if it has different priorities. But there is no need for any additional rules, other than those.</p><p>And I am also certain that I will be proven right in my lifetime.<nobr> <wbr></nobr>:)</p></htmltext>
<tokentext>I have always said that psychology , and nowadays even neurology , suffers massively from the Rube-Goldberg-machine [ wikipedia.org ] syndrome .
The brain is an extremely emergent [ wikipedia.org ] system .
Perhaps the most emergent system known to man .
Compared to the results , the basic rules are extremely simple .
But they seem to try to analyze all those resulting effects as if they were additional specific rules , instead of just results of the basic rules .
I am absolutely certain that if you create a set of simulated life-forms based on " blank " neural nets of a sufficient size , including hormones / neurotransmitters , and let them evolve through natural selection so they modify themselves , it is only a matter of time until you come up with a working life-form of the same or higher intelligence than a human .
Of course this life-form will have a different base layout if it has different priorities .
But there is no need for any additional rules , other than those .
And I am also certain that I will be proven right in my lifetime .
: )</tokentext>
<sentencetext>I have always said that psychology, and nowadays even neurology, suffers massively from the Rube-Goldberg-machine [wikipedia.org] syndrome.
The brain is an extremely emergent [wikipedia.org] system.
Perhaps the most emergent system known to man.
Compared to the results, the basic rules are extremely simple.
But they seem to try to analyze all those resulting effects as if they were additional specific rules, instead of just results of the basic rules.
I am absolutely certain that if you create a set of simulated life-forms based on “blank” neural nets of a sufficient size, including hormones / neurotransmitters, and let them evolve through natural selection so they modify themselves, it is only a matter of time until you come up with a working life-form of the same or higher intelligence than a human.
Of course this life-form will have a different base layout if it has different priorities.
But there is no need for any additional rules, other than those.
And I am also certain that I will be proven right in my lifetime.
:)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690186</id>
	<title>Re:Elephant in the Room</title>
	<author>Hurricane78</author>
	<datestamp>1270060680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I can imagine the AI running on Linux. But what cruel bastard would let a poor AI run on PHP and MySQL?? It has done nothing to you!<br>It&rsquo;s like holding a newborn baby in a room full of booby traps, spikes, and on tranquilizers that make it move in extreme slow-motion. But at least without windows!<nobr> <wbr></nobr>;)</p></htmltext>
<tokentext>I can imagine the AI running on Linux .
But what cruel bastard would let a poor AI run on PHP and MySQL ??
It has done nothing to you !
It 's like holding a newborn baby in a room full of booby traps , spikes , and on tranquilizers that make it move in extreme slow-motion .
But at least without windows !
; )</tokentext>
<sentencetext>I can imagine the AI running on Linux.
But what cruel bastard would let a poor AI run on PHP and MySQL??
It has done nothing to you!
It’s like holding a newborn baby in a room full of booby traps, spikes, and on tranquilizers that make it move in extreme slow-motion.
But at least without windows!
;)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689628</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31697176</id>
	<title>Re:Elephant in the Room</title>
	<author>Anonymous</author>
	<datestamp>1270054560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>so my computer would be judging the room's decor?<br>next it will be questioning my fashion sense</p></htmltext>
<tokentext>so my computer would be judging the room 's decor ?
next it will be questioning my fashion sense</tokentext>
<sentencetext>so my computer would be judging the room's decor?
next it will be questioning my fashion sense</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689628</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688916</id>
	<title>Probabilistic Inference?</title>
	<author>xtracto</author>
	<datestamp>1270055340000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>This kind of probabilistic inference approach with "new information" [evidence] being used to figure out "consequences" [probability of an event happening] sounds very similar to Bayesian inference/networks.</p><p>I would be interested in knowing how this approach compares to BN and the Transferable Belief Model (or <a href="http://en.wikipedia.org/wiki/Dempster-Shafer_theory" title="wikipedia.org" rel="nofollow">Dempster&ndash;Shafer theory</a> [wikipedia.org]), which itself addresses some shortcomings of BN.</p></htmltext>
<tokentext>This kind of probabilistic inference approach with " new information " [ evidence ] being used to figure out " consequences " [ probability of an event happening ] sounds very similar to Bayesian inference/networks .
I would be interested in knowing how this approach compares to BN and the Transferable Belief Model ( or Dempster – Shafer theory [ wikipedia.org ] ) , which itself addresses some shortcomings of BN .</tokentext>
<sentencetext>This kind of probabilistic inference approach with "new information" [evidence] being used to figure out "consequences" [probability of an event happening] sounds very similar to Bayesian inference/networks.
I would be interested in knowing how this approach compares to BN and the Transferable Belief Model (or Dempster–Shafer theory [wikipedia.org]), which itself addresses some shortcomings of BN.</sentencetext>
</comment>
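The Bayesian-network comparison raised above ultimately rests on Bayes' rule: the posterior is proportional to the likelihood times the prior. A tiny worked example in Python with invented numbers (a hypothetical test with 95% sensitivity and 90% specificity at a 1% base rate):

```python
# Bayes' rule for a binary hypothesis H given one piece of evidence.
# All numbers below are made up for illustration.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | evidence) = P(e|H)P(H) / (P(e|H)P(H) + P(e|~H)P(~H))."""
    joint_h = prior * p_evidence_given_h
    joint_not_h = (1 - prior) * p_evidence_given_not_h
    return joint_h / (joint_h + joint_not_h)

# 1% base rate, 95% sensitivity, 10% false-positive rate.
p = posterior(0.01, 0.95, 0.10)
print(round(p, 3))  # 0.088: even a positive test leaves H unlikely at a 1% base rate
```

This single-update rule is what Bayesian networks chain across many variables, and what Dempster–Shafer generalizes by allowing belief mass on sets of hypotheses rather than single ones.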
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689892</id>
	<title>Re:Interesting Idea</title>
	<author>astar</author>
	<datestamp>1270059540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>this AI model seems to be nothing more than something out of the hilbert and whitehead program, really lousy as demoed by godel, but so very attractive to the common positivist</p><p>for a little deeper treatment, I guess goethe on euler is appropriate</p></htmltext>
<tokentext>this AI model seems to be nothing more than something out of the hilbert and whitehead program , really lousy as demoed by godel , but so very attractive to the common positivist
for a little deeper treatment , I guess goethe on euler is appropriate</tokentext>
<sentencetext>this AI model seems to be nothing more than something out of the hilbert and whitehead program, really lousy as demoed by godel, but so very attractive to the common positivist
for a little deeper treatment, I guess goethe on euler is appropriate</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690526</id>
	<title>Re:The real summary</title>
	<author>geekoid</author>
	<datestamp>1270062180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>helicopters have wings, dumb ass.</p><p>Do you even know how one works? If you say "by pressing down air equal to the weight of the vehicle" you are wrong.</p><p>They have 4 wings, or more, that spin to create lift.</p></htmltext>
<tokentext>helicopters have wings , dumb ass .
Do you even know how one works ?
If you say " by pressing down air equal to the weight of the vehicle " you are wrong .
They have 4 wings , or more , that spin to create lift .</tokentext>
<sentencetext>helicopters have wings, dumb ass.
Do you even know how one works?
If you say "by pressing down air equal to the weight of the vehicle" you are wrong.
They have 4 wings, or more, that spin to create lift.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688888</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688924</id>
	<title>Grand unified Hyperbole of AI</title>
	<author>Anonymous</author>
	<datestamp>1270055340000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext>HYPE.  More grand unified hype.  The "grand unified theory" is just a mashup of old-days rules &amp; inference engines thrown in with probabilistic models.  Hyperbole at its finest, to call it a grand unified theory of AI.  Where are connotations and framing effects?  How does working short-term memory interact with LTM, and how does Miller's magic number show up?  How can the system understand that "john is a wolf with the ladies" without thinking that john is hairy and likes to bark at the moon?  I could go on but feel free to fill in the blanks.  So long and thanks for all the fish, MIT.</htmltext>
<tokentext>HYPE .
More grand unified hype .
The " grand unified theory " is just a mashup of old-days rules &amp; inference engines thrown in with probabilistic models .
Hyperbole at its finest , to call it a grand unified theory of AI .
Where are connotations and framing effects ?
How does working short-term memory interact with LTM , and how does Miller 's magic number show up ?
How can the system understand that " john is a wolf with the ladies " without thinking that john is hairy and likes to bark at the moon ?
I could go on but feel free to fill in the blanks .
So long and thanks for all the fish , MIT .</tokentext>
<sentencetext>HYPE.
More grand unified hype.
The "grand unified theory" is just a mashup of old-days rules &amp; inference engines thrown in with probabilistic models.
Hyperbole at its finest, to call it a grand unified theory of AI.
Where are connotations and framing effects?
How does working short-term memory interact with LTM, and how does Miller's magic number show up?
How can the system understand that "john is a wolf with the ladies" without thinking that john is hairy and likes to bark at the moon?
I could go on but feel free to fill in the blanks.
So long and thanks for all the fish, MIT.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689822</id>
	<title>Probability: The Logic of Science by Jaynes</title>
	<author>Singularitarian2048</author>
	<datestamp>1270059240000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext>FTA: "In the 1950s and 60s, artificial-intelligence researchers saw themselves as trying to uncover the rules of thought. But those rules turned out to be way more complicated than anyone had imagined. Since then, artificial-intelligence (AI) research has come to rely, instead, on probabilities -- statistical patterns that computers can learn from large sets of training data."<br><br>From the viewpoint of Jaynes and many Bayesians, probability IS simply the rules of thought.</htmltext>
<tokentext>FTA : " In the 1950s and 60s , artificial-intelligence researchers saw themselves as trying to uncover the rules of thought .
But those rules turned out to be way more complicated than anyone had imagined .
Since then , artificial-intelligence ( AI ) research has come to rely , instead , on probabilities -- statistical patterns that computers can learn from large sets of training data . "
From the viewpoint of Jaynes and many Bayesians , probability IS simply the rules of thought .</tokentext>
<sentencetext>FTA: "In the 1950s and 60s, artificial-intelligence researchers saw themselves as trying to uncover the rules of thought.
But those rules turned out to be way more complicated than anyone had imagined.
Since then, artificial-intelligence (AI) research has come to rely, instead, on probabilities -- statistical patterns that computers can learn from large sets of training data.
"From the viewpoint of Jaynes and many Bayesians, probability IS simply the rules of thought.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690592</id>
	<title>Re:Interesting Idea</title>
	<author>EventHorizon_pc</author>
	<datestamp>1270062420000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>I believe you just came up with the grand unified theory of science and marketing!  I'm sure it's 50% more optimal than current theory.  (I cringed just writing that...)</p></htmltext>
<tokenext>I believe you just came up with the grand unified theory of science and marketing !
I 'm sure it 's 50 % more optimal than current theory .
( I cringed just writing that... )</tokentext>
<sentencetext>I believe you just came up with the grand unified theory of science and marketing!
I'm sure it's 50% more optimal than current theory.
(I cringed just writing that...)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689392</id>
	<title>Basically...</title>
	<author>Anonymous</author>
	<datestamp>1270057380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The key to Artificial Intelligence is to ignore the "intelligence" part and just think of it as Artificial <i>Behavior</i>.</htmltext>
<tokenext>The key to Artificial Intelligence is to ignore the " intelligence " part and just think of it as Artificial Behavior .</tokentext>
<sentencetext>The key to Artificial Intelligence is to ignore the "intelligence" part and just think of it as Artificial Behavior.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689724</id>
	<title>Re:New input for the system</title>
	<author>Jorl17</author>
	<datestamp>1270058760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Reddish bananas fly across my bellybutton.
<br> <br>
I just love that challenge.</htmltext>
<tokenext>Reddish bananas fly across my bellybutton .
I just love that challenge .</tokentext>
<sentencetext>Reddish bananas fly across my bellybutton.
I just love that challenge.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688902</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689786</id>
	<title>MIT needs to get their PR department under control</title>
	<author>Animats</author>
	<datestamp>1270059120000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>
This is embarrassing.  MIT needs to get their PR department under control.  They're inflating small advances into major breakthroughs.  That's bad for MIT's reputation.  When a real breakthrough does come from MIT, which happens now and then, they won't have credibility.
</p><p>
Stanford and CMU seem to generate more results and less hype.</p></htmltext>
<tokenext>This is embarrassing .
MIT needs to get their PR department under control .
They 're inflating small advances into major breakthroughs .
That 's bad for MIT 's reputation .
When a real breakthrough does come from MIT , which happens now and then , they wo n't have credibility .
Stanford and CMU seem to generate more results and less hype .</tokentext>
<sentencetext>
This is embarrassing.
MIT needs to get their PR department under control.
They're inflating small advances into major breakthroughs.
That's bad for MIT's reputation.
When a real breakthrough does come from MIT, which happens now and then, they won't have credibility.
Stanford and CMU seem to generate more results and less hype.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689300</id>
	<title>Re:Interesting Idea</title>
	<author>blahplusplus</author>
	<datestamp>1270056960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"that things over 200 lbs are unlikely to fly. But wait, 747s are heavier than that."</p><p>But as a GENERAL RULE, most things _cannot fly_ without an understanding of aerodynamics and the ability to make them fly (i.e. engines, jet fuel, understanding of lift, etc.).  A 747 didn't just appear one day; it was a gradual process of testing and figuring out the principles of flight.  Birds existed prior to 747s.</p></htmltext>
<tokenext>" that things over 200 lbs are unlikely to fly .
But wait , 747s are heavier than that .
" But as a GENERAL RULE , most things _cannot fly_ without understanding of aerodynamics and having the ability to make them fly ( i.e .
engines , jet fuel , understanding of lift , etc ) .
A 747 did n't just appear one day it was a gradual process of testing and figuring out the principles of flight .
Birds existed prior to 747 's .</tokentext>
<sentencetext>"that things over 200 lbs are unlikely to fly.
But wait, 747s are heavier than that."
But as a GENERAL RULE, most things _cannot fly_ without understanding of aerodynamics and having the ability to make them fly (i.e. engines, jet fuel, understanding of lift, etc).
A 747 didn't just appear one day; it was a gradual process of testing and figuring out the principles of flight.
Birds existed prior to 747s.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690824</id>
	<title>So...</title>
	<author>Anonymous</author>
	<datestamp>1270063260000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>So... Facebook and Twitter are the best AI we have?</p></htmltext>
<tokenext>So... Facebook and Twitter are the best AI we have ?</tokentext>
<sentencetext>So... Facebook and Twitter are the best AI we have?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31697740</id>
	<title>Just a thought</title>
	<author>UK Boz</author>
	<datestamp>1270061520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Who's to say AIs don't already exist and post on Slashdot? Would it be in their interests to make it public?</htmltext>
<tokenext>Who 's to say AIs do n't already exist and post on Slashdot ?
Would it be in their interests to make it public ?</tokentext>
<sentencetext>Who's to say AIs don't already exist and post on Slashdot?
Would it be in their interests to make it public?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690564</id>
	<title>Re:Interesting Idea</title>
	<author>Anonymous</author>
	<datestamp>1270062300000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext><p>This is the exact idea I saw in a Masters project at my University that was completed several years ago.<br>A student created a modified version of Prolog that would work with probabilities. It was very powerful and was used in the military for some expert systems.</p><p>This is nothing new. Probabilistic logic has been around for a very long time.</p><p>My Master's advisor also created a similar system for Lisp, which is exactly what this is.</p></htmltext>
<tokenext>This is the exact idea I saw in a Masters project at my University that was completed several years ago .
A student created a modified version of Prolog that would work with probabilities .
It was very powerful and was used in the military for some expert systems .
This is nothing new .
Probabilistic logic has been around for a very long time .
My Master 's advisor also created a similar system for Lisp , which is exactly what this is .</tokentext>
<sentencetext>This is the exact idea I saw in a Masters project at my University that was completed several years ago.
A student created a modified version of Prolog that would work with probabilities.
It was very powerful and was used in the military for some expert systems.
This is nothing new.
Probabilistic logic has been around for a very long time.
My Master's advisor also created a similar system for Lisp, which is exactly what this is.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689126</id>
	<title>Re:The real summary</title>
	<author>Trepidity</author>
	<datestamp>1270056180000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>Mostly, he or his university are just really good at overselling. There are dozens of attempts to combine something like probabilistic inference with something more like logical inference, many of which have associated languages, and it's not clear this one solves any of the problems they have any better.</p></htmltext>
<tokenext>Mostly , he or his university are just really good at overselling .
There are dozens of attempts to combine something like probabilistic inference with something more like logical inference , many of which have associated languages , and it 's not clear this one solves any of the problems they have any better .</tokentext>
<sentencetext>Mostly, he or his university are just really good at overselling.
There are dozens of attempts to combine something like probabilistic inference with something more like logical inference, many of which have associated languages, and it's not clear this one solves any of the problems they have any better.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688888</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31698632</id>
	<title>Re:Interesting Idea</title>
	<author>Anonymous</author>
	<datestamp>1270118040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Well I was working with 6 guys who worked at EADS designing airplanes and 4 of them were scared of flying.</p><p>Beats me.</p></htmltext>
<tokenext>Well I was working with 6 guys who worked at EADS designing airplanes and 4 of them were scared of flying .
Beats me .</tokentext>
<sentencetext>Well I was working with 6 guys who worked at EADS designing airplanes and 4 of them were scared of flying.
Beats me.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689804</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689920</id>
	<title>Re:Interesting Idea</title>
	<author>Dalambertian</author>
	<datestamp>1270059720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You've hit the nail on the head. It seems like the human mind has to take some leaps of faith when it comes to everyday tasks. In your example, when a person gets on an airplane for the first time, there is a certain level of doubt. When it's the fifth time, your confidence is probably not based on your knowledge of aerodynamics, but simply your trust in the pilot and the folks at Boeing in addition to the first four successes. Could an abstraction of faith be a requirement for "good" AI?</htmltext>
<tokenext>You 've hit the nail on the head .
It seems like the human mind has to take some leaps of faith when it comes to everyday tasks .
In your example , when a person gets on an airplane for the first time , there is a certain level of doubt .
When it 's the fifth time , your confidence is probably not based on your knowledge of aerodynamics , but simply your trust in the pilot and the folks at Boeing in addition to the first four successes .
Could an abstraction of faith be a requirement for " good " AI ?</tokentext>
<sentencetext>You've hit the nail on the head.
It seems like the human mind has to take some leaps of faith when it comes to everyday tasks.
In your example, when a person gets on an airplane for the first time, there is a certain level of doubt.
When it's the fifth time, your confidence is probably not based on your knowledge of aerodynamics, but simply your trust in the pilot and the folks at Boeing in addition to the first four successes.
Could an abstraction of faith be a requirement for "good" AI?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689236</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689042</id>
	<title>Re:Can I get some wafers with that Wine?</title>
	<author>Anonymous</author>
	<datestamp>1270055820000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>From the article:</p><div class="quote"><p>As a research tool, Goodman has developed a computer programming language called Church &mdash; after the great American logician Alonzo Church</p></div><p>Your comment fits the criteria of Flamebait and Offtopic, but definitely NOT Funny.</p></htmltext>
<tokenext>From the article : As a research tool , Goodman has developed a computer programming language called Church -- after the great American logician Alonzo Church .
Your comment fits the criteria of Flamebait and Offtopic , but definitely NOT Funny .</tokentext>
<sentencetext>From the article: As a research tool, Goodman has developed a computer programming language called Church — after the great American logician Alonzo Church.

Your comment fits the criteria of Flamebait and Offtopic, but definitely NOT Funny.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688868</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31692774</id>
	<title>Re:Elephant in the Room</title>
	<author>radtea</author>
	<datestamp>1270027440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Ask an AI if the stove is hot. It should respond "I don't know, where is the stove?" Rather AI would try and make an inference based on known data.</p></div><p>The "intelligence" in artificial intelligence is essentially Platonic or Cartesian in conception, a consequence of AI research being dominated by mathematicians (and worse yet, philosophers) rather than scientists.  Scientists know we can learn almost nothing from sitting in a cave and thinking, and we can learn almost anything from interacting with the world.  But mathematicians and philosophers continue to push failed models of intelligence that completely ignore the true source of all knowledge:  our ability to act on the world and observe the consequences of our actions.</p><p>And AI without effectors is an AI without semantics, because the foundations of meaning are in action and consequence, not inference.</p></htmltext>
<tokenext>Ask an AI if the stove is hot .
It should respond " I do n't know , where is the stove ?
" Rather AI would try and make an inference based on known data . The " intelligence " in artificial intelligence is essentially Platonic or Cartesian in conception , a consequence of AI research being dominated by mathematicians ( and worse yet , philosophers ) rather than scientists .
Scientists know we can learn almost nothing from sitting in a cave and thinking , and we can learn almost anything from interacting with the world .
But mathematicians and philosophers continue to push failed models of intelligence that completely ignore the true source of all knowledge : our ability to act on the world and observe the consequences of our actions . And AI without effectors is an AI without semantics , because the foundations of meaning are in action and consequence , not inference .</tokentext>
<sentencetext>Ask an AI if the stove is hot.
It should respond "I don't know, where is the stove?"
Rather AI would try and make an inference based on known data. The "intelligence" in artificial intelligence is essentially Platonic or Cartesian in conception, a consequence of AI research being dominated by mathematicians (and worse yet, philosophers) rather than scientists.
Scientists know we can learn almost nothing from sitting in a cave and thinking, and we can learn almost anything from interacting with the world.
But mathematicians and philosophers continue to push failed models of intelligence that completely ignore the true source of all knowledge: our ability to act on the world and observe the consequences of our actions. And AI without effectors is an AI without semantics, because the foundations of meaning are in action and consequence, not inference.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689628</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688842</id>
	<title>Endless vs. infinite</title>
	<author>MarkoNo5</author>
	<datestamp>1270055040000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>What is the difference between an endless task and an infinite task?</htmltext>
<tokenext>What is the difference between an endless task and an infinite task ?</tokentext>
<sentencetext>What is the difference between an endless task and an infinite task?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31693528</id>
	<title>Re:MIT needs to get their PR department under cont</title>
	<author>bmacs27</author>
	<datestamp>1270030440000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext>M.I.T.  Marketing is Tantamount.</htmltext>
<tokenext>M.I.T .
Marketing is Tantamount .</tokentext>
<sentencetext>M.I.T.
Marketing is Tantamount.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689786</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688888</id>
	<title>The real summary</title>
	<author>Anonymous</author>
	<datestamp>1270055160000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext>Since the actual summary seems to involve a fluff filled soundclip without anything useful, here's the run down of the article.<br>1) We first tried to make AIs that could think like us by inferring new knowledge from existing knowledge.<br>2) It turns out that teaching AIs to infer new ideas is really freaking hard. (Birds can fly because they have wings, mayflies can fly because they have wings, helicopters can... what??)<br>3) We turned to probability based AI creation: you feed the AI a ton of data (training sets) and it can go "based on training data, most helicopters can fly."<br><br>4) This guy, Noah Goodman of MIT, uses inferences with probability: he uses a programming language named "Church" so the computer can go<br>"100% of birds in training set can fly. Thus, for a new bird there is a 100% chance it can fly"<br>"Oh ok, penguins can't fly. Given a random bird, 90% chance it can fly. Given random bird with weight to wing span ratio of 5 or less, 80% chance." and so on and so forth.<br>5) Using a language that mixes two separate strategies to train AIs, a grand unified theory of ai (lower case) is somehow created.<br><br>6) ???<br>7) When asked if sparrows can fly, the AI asks if it's a European sparrow or an African sparrow, and Skynet ensues.</htmltext>
<tokenext>Since the actual summary seems to involve a fluff filled soundclip without anything useful , here 's the run down of the article .
1 ) We first tried to make AIs that could think like us by inferring new knowledge from existing knowledge .
2 ) It turns out that teaching AIs to infer new ideas is really freaking hard .
( Birds can fly because they have wings , mayflies can fly because they have wings , helicopters can ... what ? ? )
3 ) We turned to probability based AI creation : you feed the AI a ton of data ( training sets ) and it can go " based on training data , most helicopters can fly . "
4 ) This guy , Noah Goodman of MIT , uses inferences with probability : he uses a programming language named " Church " so the computer can go " 100 % of birds in training set can fly .
Thus , for a new bird there is a 100 % chance it can fly " " Oh ok , penguins ca n't fly .
Given a random bird , 90 % chance it can fly .
Given random bird with weight to wing span ratio of 5 or less , 80 % chance . " and so on and so forth .
5 ) Using a language that mixes two separate strategies to train AIs , a grand unified theory of ai ( lower case ) is somehow created .
6 ) ? ? ?
7 ) When asked if sparrows can fly , the AI asks if it 's a European sparrow or an African sparrow , and Skynet ensues .</tokentext>
<sentencetext>Since the actual summary seems to involve a fluff filled soundclip without anything useful, here's the run down of the article.
1) We first tried to make AIs that could think like us by inferring new knowledge from existing knowledge.
2) It turns out that teaching AIs to infer new ideas is really freaking hard.
(Birds can fly because they have wings, mayflies can fly because they have wings, helicopters can... what??)
3) We turned to probability based AI creation: you feed the AI a ton of data (training sets) and it can go "based on training data, most helicopters can fly."
4) This guy, Noah Goodman of MIT, uses inferences with probability: he uses a programming language named "Church" so the computer can go "100% of birds in training set can fly.
Thus, for a new bird there is a 100% chance it can fly" "Oh ok, penguins can't fly.
Given a random bird, 90% chance it can fly.
Given random bird with weight to wing span ratio of 5 or less, 80% chance." and so on and so forth.
5) Using a language that mixes two separate strategies to train AIs, a grand unified theory of ai (lower case) is somehow created.
6) ???
7) When asked if sparrows can fly, the AI asks if it's a European sparrow or an African sparrow, and Skynet ensues.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689678</id>
	<title>Church programming language is Scheme</title>
	<author>Anonymous</author>
	<datestamp>1270058520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I tried to Google about <em>Church programming language</em>, and results were rather poor as one might imagine.

</p><p>Then I found out the <a href="http://projects.csail.mit.edu/church/wiki/Simple_Generative_Models" title="mit.edu" rel="nofollow">MIT wiki link where the code is stashed</a> [mit.edu]. It seems to be Scheme with some twist I'm not yet aware of though. The wiki seems to be a good introduction to Scheme also, as it starts from basics.</p></htmltext>
<tokenext>I tried to Google about Church programming language , and results were rather poor as one might imagine .
Then I found out the MIT wiki link where the code is stashed [ mit.edu ] .
It seems to be Scheme with some twist I 'm not yet aware of though .
The wiki seems to be a good introduction to Scheme also , as it starts from basics .</tokentext>
<sentencetext>I tried to Google about Church programming language, and results were rather poor as one might imagine.
Then I found out the MIT wiki link where the code is stashed [mit.edu].
It seems to be Scheme with some twist I'm not yet aware of though.
The wiki seems to be a good introduction to Scheme also, as it starts from basics.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690038</id>
	<title>Re:This looks familiar</title>
	<author>welcher</author>
	<datestamp>1270060140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It's not just lisp.  From a paper describing the language: <em>"Church is based on the Lisp model of lambda calculus, containing a pure Lisp as its deterministic subset. The semantics of Church is defined in terms of evaluation histories and conditional distributions on such histories. Church also includes a novel language construct, the stochastic memoizer, which enables simple description of many complex non-parametric models."</em>  This extension is the crucial part, as it makes it easy to encode probabilistic statements and do probabilistic inference.</htmltext>
<tokenext>It 's not just lisp .
From a paper describing the language : " Church is based on the Lisp model of lambda calculus , containing a pure Lisp as its deterministic subset .
The semantics of Church is defined in terms of evaluation histories and conditional distributions on such histories .
Church also includes a novel language construct , the stochastic memoizer , which enables simple description of many complex non-parametric models .
" This extension is the crucial part , as it makes it easy to encode probabilistic statements and do probabilistic inference .</tokentext>
<sentencetext>It's not just lisp.
From a paper describing the language: "Church is based on the Lisp model of lambda calculus, containing a pure Lisp as its deterministic subset.
The semantics of Church is defined in terms of evaluation histories and conditional distributions on such histories.
Church also includes a novel language construct, the stochastic memoizer, which enables simple description of many complex non-parametric models."
This extension is the crucial part, as it makes it easy to encode probabilistic statements and do probabilistic inference.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689128</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690160</id>
	<title>Re:Can I get some wafers with that Wine?</title>
	<author>4D6963</author>
	<datestamp>1270060560000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>0</modscore>
	<htmltext><p>Wow, butthurt at church much? Makes you wonder why. Or perhaps not...</p></htmltext>
<tokenext>Wow , butthurt at church much ?
Makes you wonder why .
Or perhaps not.. .</tokentext>
<sentencetext>Wow, butthurt at church much?
Makes you wonder why.
Or perhaps not...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688868</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689858</id>
	<title>Re:Endless vs. infinite</title>
	<author>astar</author>
	<datestamp>1270059420000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>a common treatment of the universe is finite, but unbounded</p></htmltext>
<tokenext>a common treatment of the universe is finite , but unbounded</tokentext>
<sentencetext>a common treatment of the universe is finite, but unbounded</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688930</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689690</id>
	<title>Re:Interesting Idea</title>
	<author>Anonymous</author>
	<datestamp>1270058580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Any AI must be able to learn. A 5 year old wouldn't know about a 747 or flightless bird but a 12 year old probably would. Presumably, the AI is trying to model an intelligent, reasonably-educated adult.</htmltext>
<tokenext>Any AI must be able to learn .
A 5 year old would n't know about a 747 or flightless bird but a 12 year old probably would .
Presumably , the AI is trying to model an intelligent , reasonably-educated adult .</tokentext>
<sentencetext>Any AI must be able to learn.
A 5 year old wouldn't know about a 747 or flightless bird but a 12 year old probably would.
Presumably, the AI is trying to model an intelligent, reasonably-educated adult.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689834</id>
	<title>It's Batman's Utility Belt All Over Again</title>
	<author>eldavojohn</author>
	<datestamp>1270059300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>My point was that he gave an example and the system conveniently already knew that when you assign something a weight of 200 lbs then it should reduce your assumption that it flies.  He didn't say by how much or how this information was ever gleaned, just that it was conveniently there and adjusted the answer in the right direction!  <br> <br>

A cassowary is a thing and an animal and a bird.  Sometimes people call airplanes 'birds.'  So if you learned blindly from literature, you could run into all sorts of problems.  It's a danger you run if you learn and adjust these variables while following an ontology.  <br> <br>

The fact is that if I thought up something, you would come up with the common sense logic to solve it and then wave your hand that it was already in the repository of knowledge (rule or probability or what have you) to solve the problem.  <br> <br>

What I'm trying to tell you is that I've studied predicate calculus and prolog and various methods to achieve this.  The problem isn't the system, the problem is replicating a human life (or even 18 years) of knowledge into whatever form is machine interpretable, and this solution falls prey to these problems.<div class="quote"><p>This is very promising.</p></div><p>Are you working in this field?  This language has been around since 2008.  How prevalent is it?  Even the professor doing the research notes its pitfalls and expensive computations!</p><div class="quote"><p>OR robotic systems used in manufacturing able to adjust the process as it goes. Using inputs to determine better ways to do a job.</p></div><p>Dangerously simplistic thinking.  Adjusting a process in real time is never done.  It's simulated in software first.  You are being a science fiction author.</p><div class="quote"><p>It looks like this system can change as it is used, effectively creating a 'lifetime' experience.</p></div><p>If it's that easy, then do it.  You will be the richest man alive before you die.  That is, if you complete your project before you die<nobr> <wbr></nobr>:)</p>
	</htmltext>
<tokenext>My point was that he gave an example and the system conveniently already knew that when you assign something a weight of 200 lbs then it should reduce your assumption that it flies .
He did n't say by how much or how this information was ever gleaned , just that it was conveniently there and adjusted the answer in the right direction !
A cassowary is a thing and an animal and a bird .
Sometimes people call airplanes 'birds .
' So if you learned blindly from literature , you could run into all sorts of problems .
It 's a danger you run if you learn and adjust these variables while following an ontology .
The fact is that if I thought up something , you would come up with the common sense logic to solve it and then wave your hand that it was already in the repository of knowledge ( rule or probability or what have you ) to solve the problem .
What I 'm trying to tell you is that I 've studied predicate calculus and prolog and various methods to achieve this .
The problem is n't the system , the problem is replicating a human life ( or even 18 years ) of knowledge into whatever form is machine interpretable and this solution falls prey to these problems . This is very promising . Are you working in this field ?
This language has been around since 2008 .
How prevalent is it ?
Even the professor doing the research notes its pitfalls and expensive computations ! OR robotic systems used in manufacturing able to adjust the process as it goes .
Using inputs to determine better ways to do a job . Dangerously simplistic thinking .
Adjusting a process in real time is never done .
It 's simulated in software first .
You are being a science fiction author . It looks like this system can change as it is used , effectively creating a 'lifetime ' experience . If it 's that easy , then do it .
You will be the richest man alive before you die .
That is , if you complete your project before you die : )</tokentext>
<sentencetext>My point was that he gave an example and the system conveniently already knew that when you assign something a weight of 200 lbs then it should reduce your assumption that it flies.
He didn't say by how much or how this information was ever gleaned, just that it was conveniently there and adjusted the answer in the right direction!
A cassowary is a thing and an animal and a bird.
Sometimes people call airplanes 'birds.
'  So if you learned blindly from literature, you could run into all sorts of problems.
It's a danger you run if you learn and adjust these variables while following an ontology.
The fact is that if I thought up something, you would come up with the common sense logic to solve it and then wave your hand that it was already in the repository of knowledge (rule or probability or what have you) to solve the problem.
What I'm trying to tell you is that I've studied predicate calculus and prolog and various methods to achieve this.
The problem isn't the system, the problem is replicating a human life (or even 18 years) of knowledge into whatever form is machine interpretable and this solution falls prey to these problems. This is very promising. Are you working in this field?
This language has been around since 2008.
How prevalent is it?
Even the professor doing the research notes its pitfalls and expensive computations! OR robotic systems used in manufacturing able to adjust the process as it goes.
Using inputs to determine better ways to do a job. Dangerously simplistic thinking.
Adjusting a process in real time is never done.
It's simulated in software first.
You are being a science fiction author. It looks like this system can change as it is used, effectively creating a 'lifetime' experience. If it's that easy, then do it.
You will be the richest man alive before you die.
That is, if you complete your project before you die :)
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688976</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31694052</id>
	<title>Re:This looks familiar</title>
	<author>thechao</author>
	<datestamp>1270033260000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>The only reason Scheme or Lisp can do so much is because they were originally written in Emacs.</p></htmltext>
<tokenext>The only reason Scheme or Lisp can do so much is because they were originally written in Emacs .</tokentext>
<sentencetext>The only reason Scheme or Lisp can do so much is because they were originally written in Emacs.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689128</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690822</id>
	<title>Impractical, and nothing like human intelligence</title>
	<author>Anonymous</author>
	<datestamp>1270063260000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>This project is very fluffed-up and only works in limited settings with horrible runtime. Imagine a program that included a probabilistic element such as (true if coin flip is heads), where the coin could be biased to some probability p. Church lets you write a program using such elements, and then when you feed it data it can infer those parameters p in your program. The problem with this approach is that it requires tons and tons of sampling (MCMC on the space of possible programs (including recursion) with varied parameters).</p><p>We know that humans do not do random sampling to create a hierarchy of knowledge. Noah Goodman et al. (authors of this method) tried to run a workshop at a major AI conference asking whether the brain does this kind of random sampling. The resounding response from participants was no, it does not. The only thing that justifies the fact that these guys work in the Brain and Cognitive Sciences dept. is that they run psychological studies that validate the bayesian behavior of humans in limited scenarios. They don't actually study the structure of the brain; they only guess based on its macroscopic behavior. The implied claim is that since their computer model appears to have the same bar chart as humans, that they have captured some fundamental aspect of human intelligence. This could not be further from the truth.</p><p>Human intelligence is not merely an end-to-end phenomenon, it is an amazing capacity to make sense of an infinite stream of data using greatly constrained spatial resources in real time. If you tell me that intelligence is captured in an infinitely-recursive LISP program, I'll ask you how you create concepts from the ground-up over time. Infinite recursion is a sexy selling-point, but how do you actually implement this? How do you learn that letters compose words which compose sentences and so on? How do you reasonably capture knowledge which is more than two or three levels deep? Not with random sampling. 
We already know that the brain is more frugal than this.</p></htmltext>
<tokenext>This project is very fluffed-up and only works in limited settings with horrible runtime .
Imagine a program that included a probabilistic element such as ( true if coin flip is heads ) , where the coin could be biased to some probability p. Church lets you write a program using such elements , and then when you feed it data it can infer those parameters p in your program .
The problem with this approach is that it requires tons and tons of sampling ( MCMC on the space of possible programs ( including recursion ) with varied parameters ) . We know that humans do not do random sampling to create a hierarchy of knowledge .
Noah Goodman et al .
( authors of this method ) tried to run a workshop at a major AI conference asking whether the brain does this kind of random sampling .
The resounding response from participants was no , it does not .
The only thing that justifies the fact that these guys work in the Brain and Cognitive Sciences dept .
is that they run psychological studies that validate the bayesian behavior of humans in limited scenarios .
They do n't actually study the structure of the brain ; they only guess based on its macroscopic behavior .
The implied claim is that since their computer model appears to have the same bar chart as humans , that they have captured some fundamental aspect of human intelligence .
This could not be further from the truth . Human intelligence is not merely an end-to-end phenomenon , it is an amazing capacity to make sense of an infinite stream of data using greatly constrained spatial resources in real time .
If you tell me that intelligence is captured in an infinitely-recursive LISP program , I 'll ask you how you create concepts from the ground-up over time .
Infinite recursion is a sexy selling-point , but how do you actually implement this ?
How do you learn that letters compose words which compose sentences and so on ?
How do you reasonably capture knowledge which is more than two or three levels deep ?
Not with random sampling .
We already know that the brain is more frugal than this .</tokentext>
<sentencetext>This project is very fluffed-up and only works in limited settings with horrible runtime.
Imagine a program that included a probabilistic element such as (true if coin flip is heads), where the coin could be biased to some probability p. Church lets you write a program using such elements, and then when you feed it data it can infer those parameters p in your program.
The problem with this approach is that it requires tons and tons of sampling (MCMC on the space of possible programs (including recursion) with varied parameters). We know that humans do not do random sampling to create a hierarchy of knowledge.
Noah Goodman et al.
(authors of this method) tried to run a workshop at a major AI conference asking whether the brain does this kind of random sampling.
The resounding response from participants was no, it does not.
The only thing that justifies the fact that these guys work in the Brain and Cognitive Sciences dept.
is that they run psychological studies that validate the bayesian behavior of humans in limited scenarios.
They don't actually study the structure of the brain; they only guess based on its macroscopic behavior.
The implied claim is that since their computer model appears to have the same bar chart as humans, that they have captured some fundamental aspect of human intelligence.
This could not be further from the truth. Human intelligence is not merely an end-to-end phenomenon, it is an amazing capacity to make sense of an infinite stream of data using greatly constrained spatial resources in real time.
If you tell me that intelligence is captured in an infinitely-recursive LISP program, I'll ask you how you create concepts from the ground-up over time.
Infinite recursion is a sexy selling-point, but how do you actually implement this?
How do you learn that letters compose words which compose sentences and so on?
How do you reasonably capture knowledge which is more than two or three levels deep?
Not with random sampling.
We already know that the brain is more frugal than this.</sentencetext>
</comment>
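The parameter-inference idea the comment above describes (condition a probabilistic program on data, then sample the posterior over its parameters, e.g. by MCMC) can be sketched outside of Church. This is a minimal Python sketch, not Church code; `infer_coin_bias` and its defaults are illustrative:

```python
import math
import random

def infer_coin_bias(flips, n_samples=20000, burn_in=2000, seed=0):
    """Metropolis sampler for the bias p of a coin, given observed
    flips (1 = heads).  A toy stand-in for the kind of inference the
    comment describes: condition a probabilistic program on data and
    sample the posterior over its parameters."""
    rng = random.Random(seed)
    heads, n = sum(flips), len(flips)

    def log_lik(p):
        # log P(flips | p) for a Bernoulli coin; -inf outside (0, 1)
        if p <= 0.0 or p >= 1.0:
            return float("-inf")
        return heads * math.log(p) + (n - heads) * math.log(1.0 - p)

    p, samples = 0.5, []
    for i in range(n_samples):
        proposal = p + rng.gauss(0.0, 0.1)  # symmetric random-walk proposal
        # Metropolis accept/reject under a uniform prior on p
        if math.log(rng.random() + 1e-300) < log_lik(proposal) - log_lik(p):
            p = proposal
        if i >= burn_in:
            samples.append(p)
    return sum(samples) / len(samples)  # posterior mean estimate
```

For 160 heads in 200 flips the estimate lands near 0.8, and the cost of drawing tens of thousands of samples for a single scalar parameter hints at why the commenter calls MCMC over whole program spaces expensive.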
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690262</id>
	<title>Example: prime number seeking</title>
	<author>Anonymous</author>
	<datestamp>1270061040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>An example of an endless task is listing prime numbers. You can find any number of prime numbers (and there are an infinite number of them), and each new prime number found is an 'end point' of the task, but there will always be more endpoints.</p></htmltext>
<tokenext>An example of an endless task is listing prime numbers .
You can find any number of prime numbers ( and there are an infinite number of them ) , and each new prime number found is an 'end point ' of the task , but there will always be more endpoints .</tokentext>
<sentencetext>An example of an endless task is listing prime numbers.
You can find any number of prime numbers (and there are an infinite number of them), and each new prime number found is an 'end point' of the task, but there will always be more endpoints.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688842</parent>
</comment>
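The "endless task" above is easy to make concrete: a generator can produce primes forever, each yielded prime a finished end point of a task that never terminates. A minimal Python sketch using trial division (`primes` is an illustrative name):

```python
def primes():
    """Yield prime numbers endlessly by trial division against
    the primes found so far."""
    found = []
    candidate = 2
    while True:
        # candidate is prime if no smaller prime up to sqrt divides it
        if all(candidate % p != 0 for p in found if p * p <= candidate):
            found.append(candidate)
            yield candidate
        candidate += 1
```

Taking the first five values gives 2, 3, 5, 7, 11; the generator itself never runs out of endpoints to produce.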
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31692510</id>
	<title>Re:The real summary</title>
	<author>BitZtream</author>
	<datestamp>1270026480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>combine something like probabilistic inference with something more like logical inference</p></div></blockquote><p>The problem is, they really are one and the same; until they start being treated as such, problems will abound.</p><p>The brain is a REALLY REALLY simple device that is MASSIVELY interconnected and parallel.</p><p>It's really not hard to simulate a neuron and we've been doing it for years.</p><p>Now emulating billions of them working in parallel at any sort of useful speed... well that's something that only gets accomplished in organic brains.</p></div>
	</htmltext>
<tokenext>combine something like probabilistic inference with something more like logical inference . The problem is , they really are one and the same , until they start being treated as such , problems will abound . The brain is a REALLY REALLY simple device that is MASSIVELY interconnected and parallel . It 's really not hard to simulate a neuron and we 've been doing it for years . Now emulating billions of them working in parallel at any sort of useful speed ... well that 's something that only gets accomplished in organic brains .</tokentext>
<sentencetext>combine something like probabilistic inference with something more like logical inference. The problem is, they really are one and the same, until they start being treated as such, problems will abound. The brain is a REALLY REALLY simple device that is MASSIVELY interconnected and parallel. It's really not hard to simulate a neuron and we've been doing it for years. Now emulating billions of them working in parallel at any sort of useful speed ... well that's something that only gets accomplished in organic brains.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689126</parent>
</comment>
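The claim above that simulating a single neuron is not hard holds at the level of toy models. A minimal leaky integrate-and-fire sketch in Python (`lif_neuron`, the threshold, and the leak factor are all illustrative choices, not a biophysical model):

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    toward zero each step, integrates the incoming current, and the
    neuron spikes (and resets) when the potential crosses threshold."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after a spike
        else:
            spikes.append(0)
    return spikes
```

The hard part, as the comment says, is not this loop but wiring billions of such units together in parallel at useful speed.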
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689028</id>
	<title>Who is Al?</title>
	<author>celibate for life</author>
	<datestamp>1270055760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>And why is his theory so grand?</htmltext>
<tokenext>And why is his theory so grand ?</tokentext>
<sentencetext>And why is his theory so grand?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31693178</id>
	<title>Re:Elephant in the Room</title>
	<author>recharged95</author>
	<datestamp>1270029000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>"if a Bat is a mammal and can fly can a squirrel"
<br>
<br>
<br>
If it's ever fed squirrels <i>roasted</i> peanuts in Central Park, NYC, then the answer would be Yes.</htmltext>
<tokenext>" if a Bat is a mammal and can fly can a squirrel " If it 's ever fed squirrels roasted peanuts in central park NYC , then the answer would be Yes .</tokentext>
<sentencetext>"if a Bat is a mammal and can fly can a squirrel"



If it's ever fed squirrels roasted peanuts in central park NYC, then the answer would be Yes.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689628</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689222</id>
	<title>Re:Grand unified Hyperbole of AI</title>
	<author>Fnkmaster</author>
	<datestamp>1270056600000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>AI used to be the subfield of computer science that developed cool algorithms and hyped itself grandly.  Five years later, the rest of the field would be using these algorithms to solve actual problems, without the grandiose hype.</p><p>These days, I'm not sure if AI is even that.  But maybe some of this stuff will prove to be useful.  You just have to put on your hype filter whenever "AI" is involved.</p></htmltext>
<tokenext>AI used to be the subfield of computer science that developed cool algorithms and hyped itself grandly .
Five years later , the rest of the field would be using these algorithms to solve actual problems , without the grandiose hype.These days , I 'm not sure if AI is even that .
But maybe some of this stuff will prove to be useful .
You just have to put on your hype filter whenever " AI " is involved .</tokentext>
<sentencetext>AI used to be the subfield of computer science that developed cool algorithms and hyped itself grandly.
Five years later, the rest of the field would be using these algorithms to solve actual problems, without the grandiose hype.These days, I'm not sure if AI is even that.
But maybe some of this stuff will prove to be useful.
You just have to put on your hype filter whenever "AI" is involved.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688924</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689988</id>
	<title>Re:Elephant in the Room</title>
	<author>astar</author>
	<datestamp>1270060020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>if i thought i could build an ai, I would start by giving a computer system control of the world's physical production.  I would observe, say, electricity getting short and then see if the computer system builds a fusion reactor.  the ai is not going to be skynet</p></htmltext>
<tokenext>if i thought i could build an ai , I would start by giving a computer system control of the world 's physical production .
I would observe , say , electricity getting short and then see if the computer system builds a fusion reactor .
the ai is not going to be skynet</tokentext>
<sentencetext>if i thought i could build an ai, I would start by giving a computer system control of the world's physical production.
I would observe, say, electricity getting short and then see if the computer system builds a fusion reactor.
the ai is not going to be skynet</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689628</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688938</id>
	<title>Terrible Summary</title>
	<author>Anonymous</author>
	<datestamp>1270055400000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>When you use the phrase "Grand Unified Theory" you better have something impressive to show me.</p></htmltext>
<tokenext>When you use the phrase " Grand Unified Theory " you better have something impressive to show me .</tokentext>
<sentencetext>When you use the phrase "Grand Unified Theory" you better have something impressive to show me.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690384</id>
	<title>Re:Interesting Idea</title>
	<author>geekoid</author>
	<datestamp>1270061640000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><p>Ships float because wood floats, and you make a ship from wood. Once you have made a ship from wood, then logically ALL ships can float. So then you can make them out of steel.<br>Q.E.D.</p></htmltext>
<tokenext>Ships float because wood floats , and you make a ship from wood .
Once you have made a ship from wood , then logically ALL ships can float .
So then you can make them out of steel . Q.E.D .</tokentext>
<sentencetext>Ships float because wood floats, and you make a ship from wood.
Once you have made a ship from wood, then logically ALL ships can float.
So then you can make them out of steel. Q.E.D.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689236</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31697348</id>
	<title>Re:Terrible Summary</title>
	<author>mjwx</author>
	<datestamp>1270056600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>When you use the phrase "Grand Unified Theory" you better have something impressive to show me.</p></div></blockquote><p>

Considering the acronym is GUT, he may indeed have something impressive to show you.</p></div>
	</htmltext>
<tokenext>When you use the phrase " Grand Unified Theory " you better have something impressive to show me .
Considering the acronym is GUT , he may indeed have something impressive to show you .</tokentext>
<sentencetext>When you use the phrase "Grand Unified Theory" you better have something impressive to show me.
Considering the acronym is GUT, he may indeed have something impressive to show you.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688938</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690102</id>
	<title>Re:The real summary</title>
	<author>alewar</author>
	<datestamp>1270060320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It sounds like <a href="http://en.wikipedia.org/wiki/Default_logic" title="wikipedia.org">Default Logic</a> [wikipedia.org]</htmltext>
<tokenext>It sounds like Default Logic [ wikipedia.org ]</tokentext>
<sentencetext>It sounds like Default Logic [wikipedia.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688888</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688850</id>
	<title>NO NO let me make up the rest of the Story</title>
	<author>Bat Dude</author>
	<datestamp>1270055040000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext>Sounds a bit like a journalist's brain to me... NO NO let me make up the rest of the Story</htmltext>
<tokenext>Sounds a bit like a journalist 's brain to me ... NO NO let me make up the rest of the Story</tokentext>
<sentencetext>Sounds a bit like a journalist's brain to me ... NO NO let me make up the rest of the Story</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689236</id>
	<title>Re:Interesting Idea</title>
	<author>digitaldrunkenmonk</author>
	<datestamp>1270056660000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>The first time I saw an airplane, I didn't think the damn thing could fly. I mean, hell, look at it! It's huge! By the same token, how can a ship float? Before I took some basic physics, it was impossible in my mind, yet it occurred. An AI doesn't mean it comes equipped with the sum of human knowledge; it means it simulates the human mind. If I learned that a bird was over 200 lbs before seeing the bird, I'd honestly expect that fat son of a bitch to fall right out of the sky.</p><p>If you were unfamiliar with the concept of ships or planes, and someone told you that a 50,000 ton vessel could float, would you really believe that without seeing it? Or that a 150 ton contraption could fly?</p><p>Humans have a problem dealing with that. Heavy things fall. Heavy things sink. To ask an AI modeled after a human mind to intuitively understand the intricacies of buoyancy is asking too much.</p></htmltext>
<tokenext>The first time I saw an airplane , I did n't think the damn thing could fly .
I mean , hell , look at it !
It 's huge !
By the same token , how can a ship float ?
Before I took some basic physics , it was impossible in my mind , yet it occurred .
An AI does n't mean it comes equipped with the sum of human knowledge ; it means it simulates the human mind .
If I learned that a bird was over 200 lbs before seeing the bird , I 'd honestly expect that fat son of a bitch to fall right out of the sky . If you were unfamiliar with the concept of ships or planes , and someone told you that a 50,000 ton vessel could float , would you really believe that without seeing it ?
Or that a 150 ton contraption could fly ? Humans have a problem dealing with that .
Heavy things fall .
Heavy things sink .
To ask an AI modeled after a human mind to intuitively understand the intricacies of buoyancy is asking too much .</tokentext>
<sentencetext>The first time I saw an airplane, I didn't think the damn thing could fly.
I mean, hell, look at it!
It's huge!
By the same token, how can a ship float?
Before I took some basic physics, it was impossible in my mind, yet it occurred.
An AI doesn't mean it comes equipped with the sum of human knowledge; it means it simulates the human mind.
If I learned that a bird was over 200 lbs before seeing the bird, I'd honestly expect that fat son of a bitch to fall right out of the sky. If you were unfamiliar with the concept of ships or planes, and someone told you that a 50,000 ton vessel could float, would you really believe that without seeing it?
Or that a 150 ton contraption could fly? Humans have a problem dealing with that.
Heavy things fall.
Heavy things sink.
To ask an AI modeled after a human mind to intuitively understand the intricacies of buoyancy is asking too much.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688968</id>
	<title>Same old...</title>
	<author>Anonymous</author>
	<datestamp>1270055520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>From what I have seen 99% of AI research is only aiming to mimic AI.</p><p>From what I can tell this approach doesn't unite the field but instead tries to legitimize the 99%. In my opinion, that's a dead end.</p></htmltext>
<tokenext>From what I have seen 99 % of AI research is only aiming to mimic AI . From what I can tell this approach does n't unite the field but instead tries to legitimize the 99 % .
In my opinion , that 's a dead end .</tokentext>
<sentencetext>From what I have seen 99% of AI research is only aiming to mimic AI. From what I can tell this approach doesn't unite the field but instead tries to legitimize the 99%.
In my opinion, that's a dead end.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689158</id>
	<title>Re:That is very interesting</title>
	<author>K. S. Kyosuke</author>
	<datestamp>1270056300000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext>Possibly some problems in your childhood are related to this.</htmltext>
<tokenext>Possibly some problems in your childhood are related to this .</tokentext>
<sentencetext>Possibly some problems in your childhood are related to this.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688818</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689030</id>
	<title>Re:Endless vs. infinite</title>
	<author>geekoid</author>
	<datestamp>1270055760000</datestamp>
	<modclass>None</modclass>
	<modscore>2</modscore>
	<htmltext><p>An endless task is just the same thing over and over again. An infinite task goes on because of changing variables and growing experience.</p><p>So you can just write down a list of things and say 'go through this list', but if the list changes because you are working on the list, then it's infinite.</p><p>At least that's how it reads in the context he used it.</p></htmltext>
<tokenext>An endless task is just the same thing over and over again .
An infinite task goes on because of changing variables and growing experience . So you can just write down a list of things and say 'go through this list ' , but if the list changes because you are working on the list , then it 's infinite . At least that 's how it reads in the context he used it .</tokentext>
<sentencetext>An endless task is just the same thing over and over again.
An infinite task goes on because of changing variables and growing experience. So you can just write down a list of things and say 'go through this list', but if the list changes because you are working on the list, then it's infinite. At least that's how it reads in the context he used it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688842</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689608</id>
	<title>Re:The real summary</title>
	<author>nine-times</author>
	<datestamp>1270058220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>4) This guy, Noah Goodman of MIT, uses inferences with probability: he uses a programming language named "Church" so the computer can go
"100% of birds in training set can fly. Thus, for a new bird there is a 100% chance it can fly"
"Oh ok, penguins can't fly. Given a random bird, 90% chance it can fly. Given a random bird with weight to wing span ratio of 5 or less, 80% chance." and so on and so forth.
5) Using a language that mixes two separate strategies to train AIs, a grand unified theory of ai (lower case) is somehow created.</p></div><p>In my mind, you don't get to call it "AI" until, after feeding the computer information on thousands of birds and asking it whether penguins can fly, it responds, "I guess, probably.  But look, I don't care about birds.  What makes you think I care about birds?  Tell me about that sexy printer you have over there.  I'd like to plug into her USB port."
</p><p>You think I'm joking.  You hope I'm joking.  I'm not joking.</p></div>
	</htmltext>
<tokenext>4 ) This guy , Noah Goodman of MIT , uses inferences with probability : he uses a programming language named " Church " so the computer can go " 100 \ % of birds in training set can fly .
Thus , for a new bird there is a 100 \ % chance it can fly " " Oh ok , penguins ca n't fly .
Given a random bird , 90 \ % chance it can fly .
Given random bird with weight to wing span ratio of 5 or less , 80 \ % chance .
" and so on and so forth .
5 ) Using a language that mixes two separate strategies to train AIs , a grand unified theory of ai ( lower case ) is somehow created.In my mind , you do n't get to call it " AI " until , after feeding the computer information on thousands of birds and asks it whether penguins can fly , it responds , " I guess , probably .
But look , I do n't care about birds .
What makes you think I care about birds ?
Tell me about that sexy printer you have over there .
I 'd like to plug into her USB port .
" You think I 'm joking .
You hope I 'm joking .
I 'm not joking .</tokentext>
<sentencetext>4) This guy, Noah Goodman of MIT, uses inferences with probability: he uses a programming language named "Church" so the computer can go
"100% of birds in training set can fly.
Thus, for a new bird there is a 100% chance it can fly"
"Oh ok, penguins can't fly.
Given a random bird, 90% chance it can fly.
Given random bird with weight to wing span ratio of 5 or less, 80% chance.
" and so on and so forth.
5) Using a language that mixes two separate strategies to train AIs, a grand unified theory of ai (lower case) is somehow created. In my mind, you don't get to call it "AI" until, after feeding the computer information on thousands of birds and asking it whether penguins can fly, it responds, "I guess, probably.
But look, I don't care about birds.
What makes you think I care about birds?
Tell me about that sexy printer you have over there.
I'd like to plug into her USB port.
"
You think I'm joking.
You hope I'm joking.
I'm not joking.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688888</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690286</id>
	<title>Re:Endless vs. infinite</title>
	<author>Hurricane78</author>
	<datestamp>1270061160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>My understanding is that an endless task is finite at any point in time, but continues to grow for eternity.</p></div><p>My understanding is, that there is no such thing as infinity, but only endlessness. And that in mathematics, infinity and zero should be redefined as endlessness, with a time coordinate attached to it. This would solve division trough zero beautifully.<br>In my opinion, in mathematics, every set of operations in sequenced, and hence you can not have a complete theory, without including this, and its result: A time-line.<br>If you try to work with infinity in something like Haskell, you get this exact result. Infinity equals an endless sequence of operations. Make it into a list of partial results, and you got your time-line.</p></div>
	</htmltext>
<tokenext>My understanding is that an endless task is finite at any point in time , but continues to grow for eternity.My understanding is , that there is no such thing as infinity , but only endlessness .
And that in mathematics , infinity and zero should be redefined as endlessness , with a time coordinate attached to it .
This would solve division through zero beautifully . In my opinion , in mathematics , every set of operations is sequenced , and hence you can not have a complete theory , without including this , and its result : A time-line . If you try to work with infinity in something like Haskell , you get this exact result .
Infinity equals an endless sequence of operations .
Make it into a list of partial results , and you got your time-line .</tokentext>
<sentencetext>My understanding is that an endless task is finite at any point in time, but continues to grow for eternity.My understanding is, that there is no such thing as infinity, but only endlessness.
And that in mathematics, infinity and zero should be redefined as endlessness, with a time coordinate attached to it.
This would solve division trough zero beautifully.In my opinion, in mathematics, every set of operations in sequenced, and hence you can not have a complete theory, without including this, and its result: A time-line.If you try to work with infinity in something like Haskell, you get this exact result.
Infinity equals an endless sequence of operations.
Make it into a list of partial results, and you got your time-line.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688930</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690522</id>
	<title>Re:New input for the system</title>
	<author>khellendros1984</author>
	<datestamp>1270062180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>"The pig go. Go is to the fountain. The pig put foot. Grunt. Foot in what? ketchup. The dove fly. Fly is in sky. The dove drop something. The something on the pig. The pig disgusting. The pig rattle. Rattle with dove. The dove angry. The pig leave. The dove produce. Produce is chicken wing. With wing bark. No Quack."</htmltext>
<tokenext>" The pig go .
Go is to the fountain .
The pig put foot .
Grunt. Foot in what ?
ketchup. The dove fly .
Fly is in sky .
The dove drop something .
The something on the pig .
The pig disgusting .
The pig rattle .
Rattle with dove .
The dove angry .
The pig leave .
The dove produce .
Produce is chicken wing .
With wing bark .
No Quack .
"</tokentext>
<sentencetext>"The pig go.
Go is to the fountain.
The pig put foot.
Grunt. Foot in what?
ketchup. The dove fly.
Fly is in sky.
The dove drop something.
The something on the pig.
The pig disgusting.
The pig rattle.
Rattle with dove.
The dove angry.
The pig leave.
The dove produce.
Produce is chicken wing.
With wing bark.
No Quack.
"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688902</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688994</id>
	<title>bad summary</title>
	<author>Anonymous</author>
	<datestamp>1270055640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The summary reads like it was written by a 14-year-old. Without reading the article, it is completely unclear what "this approach" is, how this cognitive model is different, and what "the Church" is. I know, read the article; but why would I if the summary makes me confused instead of curious?</p></htmltext>
<tokenext>The summary reads like it was written by a 14 year old .
Without reading the article , it is completely unclear what " this approach " is , how this cognitive model is different , and what " the Church " is .
I know , read the article ; but why would I if the summary makes me confused instead of curious ?</tokentext>
<sentencetext>The summary reads like it was written by a 14 year old.
Without reading the article, it is completely unclear what "this approach" is, how this cognitive model is different, and what "the Church" is.
I know, read the article; but why would I if the summary makes me confused instead of curious?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691216</id>
	<title>Re:Basically...</title>
	<author>Chris Burke</author>
	<datestamp>1270064760000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Pretty much.</p><p>The pragmatic answer to the <a href="http://en.wikipedia.org/wiki/Chinese_room" title="wikipedia.org">Chinese Room</a> [wikipedia.org] problem is "Who gives a fuck? There's no way to prove that our own brains aren't basically Chinese Rooms, so if the only difference between a human intelligence and an artificial one is that we know how the artificial one works, why does it matter?"</p><p>But really, identifying patterns, and then inferring further information from the rules those patterns imply, is a pretty good behavior.</p></htmltext>
<tokenext>Pretty much.The pragmatic answer to the Chinese Room [ wikipedia.org ] problem is " Who gives a fuck ?
There 's no way to prove that our own brains are n't basically Chinese Rooms , so if the only difference between a human intelligence and an artificial one is that we know how the artificial one works , why does it matter ?
" But really , identifying patterns , and then inferring further information from the rules those patterns imply , is a pretty good behavior .</tokentext>
<sentencetext>Pretty much.The pragmatic answer to the Chinese Room [wikipedia.org] problem is "Who gives a fuck?
There's no way to prove that our own brains aren't basically Chinese Rooms, so if the only difference between a human intelligence and an artificial one is that we know how the artificial one works, why does it matter?
"But really, identifying patterns, and then inferring further information from the rules those patterns imply, is a pretty good behavior.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689392</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689652</id>
	<title>Re:Grand Unified Theory of AI? Hardly.</title>
	<author>cadrell0</author>
	<datestamp>1270058460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>"noah goodman ai church syntax" gives <a href="http://www.mit.edu/~ndg/" title="mit.edu" rel="nofollow">http://www.mit.edu/~ndg/</a> [mit.edu] as the first result.
There is a link near the top to <a href="http://projects.csail.mit.edu/church/wiki/Church" title="mit.edu" rel="nofollow">http://projects.csail.mit.edu/church/wiki/Church</a> [mit.edu]</htmltext>
<tokenext>" noah goodman ai church syntax " gives http : //www.mit.edu/ ~ ndg/ [ mit.edu ] as the first result .
There is a link near the top to http : //projects.csail.mit.edu/church/wiki/Church [ mit.edu ]</tokentext>
<sentencetext>"noah goodman ai church syntax" gives http://www.mit.edu/~ndg/ [mit.edu] as the first result.
There is a link near the top to http://projects.csail.mit.edu/church/wiki/Church [mit.edu]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689168</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690302</id>
	<title>Re:Endless vs. infinite</title>
	<author>elfprince13</author>
	<datestamp>1270061280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>In particular the gap between countable infinities and uncountable infinities.</htmltext>
<tokenext>In particular the gap between countable infinities and uncountable infinities .</tokentext>
<sentencetext>In particular the gap between countable infinities and uncountable infinities.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689960</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690798</id>
	<title>Re:New input for the system</title>
	<author>wirelessbuzzers</author>
	<datestamp>1270063200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>More people have been to Germany than I have.</p></htmltext>
<tokenext>More people have been to Germany than I have .</tokentext>
<sentencetext>More people have been to Germany than I have.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688902</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690042</id>
	<title>Why unify failed approaches?</title>
	<author>Anonymous</author>
	<datestamp>1270060140000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The stagnant field of AI doesn't need to "unify" its failed approaches to modeling human thought; what it needs is something new and revolutionary.</p></htmltext>
<tokenext>The stagnant field of AI does n't need to " unify " its failed approaches to modeling human thought , what it needs is something new and revolutionary .</tokentext>
<sentencetext>The stagnant field of AI doesn't need to "unify" its failed approaches to modeling human thought, what it needs is something new and revolutionary.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689026</id>
	<title>Re:Can I get some wafers with that Wine?</title>
	<author>spazdor</author>
	<datestamp>1270055760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><a href="http://en.wikipedia.org/wiki/Alonzo_Church" title="wikipedia.org">http://en.wikipedia.org/wiki/Alonzo_Church</a> [wikipedia.org]</p></htmltext>
<tokenext>http : //en.wikipedia.org/wiki/Alonzo_Church [ wikipedia.org ]</tokentext>
<sentencetext>http://en.wikipedia.org/wiki/Alonzo_Church [wikipedia.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688868</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690518</id>
	<title>Re:Interesting Idea</title>
	<author>istartedi</author>
	<datestamp>1270062180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Where did you live that you can recall "the first time I saw an airplane"? I don't recall that. I grew up in the Washington DC suburbs. They were always there. They always flew. The big thing for me was learning not to be afraid of the loud noise when they flew too close.</p><p>So based on my acquired ruleset, "planes fly" is just axiomatic.</p></htmltext>
<tokenext>Where did you live that you can recall " the first time I saw an airplane " ?
I do n't recall that .
I grew up in the Washington DC suburbs .
They were always there .
They always flew .
The big thing for me was learning not to be afraid of the loud noise when they flew too close.So based on my acquired ruleset , " planes fly " is just axiomatic .</tokentext>
<sentencetext>Where did you live that you can recall "the first time I saw an airplane"?
I don't recall that.
I grew up in the Washington DC suburbs.
They were always there.
They always flew.
The big thing for me was learning not to be afraid of the loud noise when they flew too close.
So based on my acquired ruleset, "planes fly" is just axiomatic.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689236</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690190</id>
	<title>Ok, so...</title>
	<author>jd</author>
	<datestamp>1270060740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>...it's an inference engine with fuzzy logic rather than discrete logic, such that if you represented the inference training set in an N-ary tree, the fuzzy value is proportional to the fraction of branches in the tree that match a given inference. (They'd be better off with an S-curve, as that seems to be a better model for modeling real-world situations than a linear system.)</p></htmltext>
<tokenext>...it 's an inference engine with fuzzy logic rather than discrete logic , such that if you represented the inference training set in an N-ary tree , the fuzzy value is proportional to the fraction of branches in the tree that match a given inference .
( They 'd be better off with an S-curve , as that seems to be a better model for modeling real-world situations than a linear system .
)</tokentext>
<sentencetext>...it's an inference engine with fuzzy logic rather than discrete logic, such that if you represented the inference training set in an N-ary tree, the fuzzy value is proportional to the fraction of branches in the tree that match a given inference.
(They'd be better off with an S-curve, as that seems to be a better model for modeling real-world situations than a linear system.
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688888</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689860</id>
	<title>Interesting timing</title>
	<author>BlueBoxSW.com</author>
	<datestamp>1270059420000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>I've always enjoyed reading about AI, and like many here have done some experiments on my own time.</p><p>This week I've been looking for a simple state modeling language, for use in fairly simple processes, that would tie into some AI.</p><p>I wasn't really that impressed with anything I found, so when I saw the headline, I jumped to read the article.</p><p>Unfortunately, this is a step in the right direction, but not all that clear to write or maintain, and probably too complex for what I need to do.</p><p>The cleanest model I've found for doing these types of things is the 1896 edition of Lewis Carroll's Symbolic Logic. (Yes, the same Lewis Carroll who wrote Alice in Wonderland.)</p></htmltext>
<tokenext>I 've always enjoyed reading about AI , and like many here have done some experiments on my own time.This week I 've been looking for a simple state modeling language , for use in fairly simple processes , that would tie into some AI.I was n't really that impressed with anything I found , so when I saw the headline , I jumped to read the article.Unfortunately , this is a step in the right direction , but not all that clear to write or maintain , and probably too complex for what I need to do.The cleanest model to do these types of things I 've found is the 1896 edition of Lewis Caroll 's Symbolic Logic .
( Yes , the same Lewis Caroll that wrote Alice in Wonderland ) .</tokentext>
<sentencetext>I've always enjoyed reading about AI, and like many here have done some experiments on my own time.
This week I've been looking for a simple state modeling language, for use in fairly simple processes, that would tie into some AI.
I wasn't really that impressed with anything I found, so when I saw the headline, I jumped to read the article.
Unfortunately, this is a step in the right direction, but not all that clear to write or maintain, and probably too complex for what I need to do.
The cleanest model to do these types of things I've found is the 1896 edition of Lewis Carroll's Symbolic Logic.
(Yes, the same Lewis Carroll that wrote Alice in Wonderland).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689612</id>
	<title>Several hours too early for an April Fool</title>
	<author>Forget4it</author>
	<datestamp>1270058280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Several hours too early for an April Fool.</htmltext>
<tokenext>Several hours too early for an April Fool .</tokentext>
<sentencetext>Several hours too early for an April Fool.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31694092</id>
	<title>Re:Can I get some wafers with that Wine?</title>
	<author>HTH NE1</author>
	<datestamp>1270033380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You should have made a reference to Leonard Church instead.</p></htmltext>
<tokenext>You should have made a reference to Leonard Church instead .</tokentext>
<sentencetext>You should have made a reference to Leonard Church instead.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688868</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689674</id>
	<title>Re:Interesting Idea</title>
	<author>hrimhari</author>
	<datestamp>1270058520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>I have learned today that putting 'grand' and 'unified' in the title of an idea in science is very powerful for marketing.</p></div><p>You haven't been around /. much lately then...</p></div>
	</htmltext>
<tokenext>I have learned today that putting 'grand ' and 'unified ' at the title of an idea in science is very powerful for marketing.You have n't been around / .
much lately then.. .</tokentext>
<sentencetext>I have learned today that putting 'grand' and 'unified' in the title of an idea in science is very powerful for marketing.
You haven't been around /. much lately then...
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31696458</id>
	<title>Re:The real summary</title>
	<author>Anonymous</author>
	<datestamp>1270048320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>7. Profit</p></div><p>There, fixed that for you.</p></div>
	</htmltext>
<tokenext>7 .
ProfitThere , fixed that for you .</tokentext>
<sentencetext>7.
ProfitThere, fixed that for you.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688888</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31698332</id>
	<title>Re:Interesting Idea</title>
	<author>Anonymous</author>
	<datestamp>1270113540000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>That sounds like witchery to me..</p></htmltext>
<tokenext>That sounds like witchery to me. .</tokentext>
<sentencetext>That sounds like witchery to me..</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690384</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690866</id>
	<title>Actual Paper</title>
	<author>gimballock</author>
	<datestamp>1270063440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>For actual details check TFA: <a href="http://web.mit.edu/droy/www/papers/GooManRoyBonTenUAI2008.pdf" title="mit.edu" rel="nofollow">http://web.mit.edu/droy/www/papers/GooManRoyBonTenUAI2008.pdf</a> [mit.edu]

<a href="http://projects.csail.mit.edu/church/wiki/Papers" title="mit.edu" rel="nofollow">http://projects.csail.mit.edu/church/wiki/Papers</a> [mit.edu]</htmltext>
<tokenext>For actual details check TFA : http : //web.mit.edu/droy/www/papers/GooManRoyBonTenUAI2008.pdf [ mit.edu ] http : //projects.csail.mit.edu/church/wiki/Papers [ mit.edu ]</tokentext>
<sentencetext>For actual details check TFA: http://web.mit.edu/droy/www/papers/GooManRoyBonTenUAI2008.pdf [mit.edu]

http://projects.csail.mit.edu/church/wiki/Papers [mit.edu]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689440</id>
	<title>Hype==More Funding?</title>
	<author>aaaaaaargh!</author>
	<datestamp>1270057620000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>Wow, as someone working in this domain I can say that this article is full of bold conjectures and shameless self-advertising. For a start, (1) uncertain reasoning and expert systems using it are hardly new. This is a well-established research domain and certainly not the holy grail of AI. Because, (2) all this probabilistic reasoning is nice and fine in small toy domains, but it quickly becomes computationally intractable in larger domains, particularly when complete independence of the random variables cannot be assured. And for this reason, (3) albeit being a useful tool and important research area, probabilistic reasoning and uncertain inference is definitely not the basis of human reasoning. The way we draw inferences is <i>much more heuristic</i>, because we are so heavily resource-bound, and there are tons of other reasons why probabilistic inference is not cognitively adequate. (One of them, for example, is that untrained humans are incapable of making even the simplest calculations in probability theory correctly, because it is <i>harder than it might seem at first glance</i>.) Finally, (4) there are numerous open issues with all sorts of uncertain inference, ranging from certain impossibility results, over different choices that all seem to be rational somehow (e.g. DS-belief vs. ranking functions vs. probability vs. plausibility measures and how they are interconnected with each other, alternative decision theories, different rules for dealing with conflicting evidence, etc.), to philosophical justifications of probability (e.g. frequentism vs. Bayesianism vs. propensity theory and their quirks, justification of inverse inference, etc.).</p><p><b>In a nutshell, there is nothing wrong with this research in general or the Church programming language, but it is hardly a breakthrough in AI.</b></p></htmltext>
<tokenext>Wow , as someone working in this domain I can say that this article is full of bold conjectures and shameless self-advertising .
For a start , ( 1 ) uncertain reasoning and expert systems using it is hardly new .
This is a well-established research domain and certainly not the golden grail of AI .
Because , ( 2 ) all this probabilistic reasoning is nice and fine in small toy domains , but it quickly become computationally intractable in larger domains , particularly when complete independence of the random variables can not be assured .
And for this reason , ( 3 ) albeit being a useful tool and important research area , probabilistic reasoning and uncertain inference is definitely not the basis of human reasoning .
The way we draw inference is much more heuristic , because we are so heavily resource-bound , and there are tons of other reasons why probabilistic inference is not cognitively adequate .
( One of them , for example , is that untrained humans are incapable of making even the simplest calculations in probability theory correctly , because it is harder than it might seem at first glance .
) Finally , ( 5 ) there are numerous open issues with all sorts of uncertain inference , ranging from certain impossibility results , over different choices that all seem to be rational somehow ( e.g .
DS-belief vs. ranking functions vs. probability vs. plausibility measures and how they are intereconnected with each other , alternative decision theories , different rules of dealing with conflicting evidence , etc .
) to philosophical justifications of probability ( e.g .
frequentism vs. Bayesianism vs. propensity theory and their quirks , justification of inverse inference , etc ) .In a nutshell , there is nothing wrong with this research in general or the Church programming language , but it is hardly a breakthrough in AI .</tokentext>
<sentencetext>Wow, as someone working in this domain I can say that this article is full of bold conjectures and shameless self-advertising.
For a start, (1) uncertain reasoning and expert systems using it are hardly new.
This is a well-established research domain and certainly not the holy grail of AI.
Because, (2) all this probabilistic reasoning is nice and fine in small toy domains, but it quickly becomes computationally intractable in larger domains, particularly when complete independence of the random variables cannot be assured.
And for this reason, (3) albeit being a useful tool and important research area, probabilistic reasoning and uncertain inference is definitely not the basis of human reasoning.
The way we draw inferences is much more heuristic, because we are so heavily resource-bound, and there are tons of other reasons why probabilistic inference is not cognitively adequate.
(One of them, for example, is that untrained humans are incapable of making even the simplest calculations in probability theory correctly, because it is harder than it might seem at first glance.)
Finally, (4) there are numerous open issues with all sorts of uncertain inference, ranging from certain impossibility results, over different choices that all seem to be rational somehow (e.g. DS-belief vs. ranking functions vs. probability vs. plausibility measures and how they are interconnected with each other, alternative decision theories, different rules for dealing with conflicting evidence, etc.), to philosophical justifications of probability (e.g. frequentism vs. Bayesianism vs. propensity theory and their quirks, justification of inverse inference, etc.).
In a nutshell, there is nothing wrong with this research in general or the Church programming language, but it is hardly a breakthrough in AI.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689454</id>
	<title>Grand Unified Theory?</title>
	<author>Anonymous</author>
	<datestamp>1270057680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>This is not a Grand Unified Theory of anything. Grandiose is the word that comes to mind.</p></htmltext>
<tokenext>This is not a Grand Unified Theory of anything .
Grandiose is the word that comes to mind .</tokentext>
<sentencetext>This is not a Grand Unified Theory of anything.
Grandiose is the word that comes to mind.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689408</id>
	<title>Re:New input for the system</title>
	<author>dkleinsc</author>
	<datestamp>1270057440000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><p>How about "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo."</p></htmltext>
<tokenext>How about " Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo .
"</tokentext>
<sentencetext>How about "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo.
"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688902</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691172</id>
	<title>Re:New input for the system</title>
	<author>Anonymous</author>
	<datestamp>1270064640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>City animal City animal animal animal City animal</p></htmltext>
<tokenext>City animal City animal animal animal City animal</tokentext>
<sentencetext>City animal City animal animal animal City animal</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689408</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689430</id>
	<title>"john is a wolf with the ladies"</title>
	<author>just fiddling around</author>
	<datestamp>1270057620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>See, that's not an AI problem, that's a semantics problem. The fact that you can mislead an AI by feeding it ambiguous inputs does not detract from its capacity to solve problems.</p><p>A perfect AI does not need to be omniscient; it needs to solve a problem correctly considering what it knows.</p></htmltext>
<tokenext>See , that 's not an AI problem , that 's a semantics problem .
The fact that you can mislead an AI by feeding it ambiguous inputs does not detract from it 's capacity to solve problems.A perfect AI does not need to be omniscient , it needs to solve a problem correctly considering what it knows .</tokentext>
<sentencetext>See, that's not an AI problem, that's a semantics problem.
The fact that you can mislead an AI by feeding it ambiguous inputs does not detract from its capacity to solve problems.
A perfect AI does not need to be omniscient; it needs to solve a problem correctly considering what it knows.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688924</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689476</id>
	<title>Re:The real summary</title>
	<author>Anonymous</author>
	<datestamp>1270057740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I was thinking about Skynet. Basically, we were discussing yesterday how the US military has more than three times as many unmanned aircraft as regular manned ones.<br>So, some genius must realize in the next couple of years: why let some poor spotter risk his life to point out where the unmanned aircraft must drop its bombs if you can have some sort of AI doing that?<br>Give it like 5 years before the whole US defense system is controlled by some sort of mighty AI. Then, as you said, Skynet comes in.</p></htmltext>
<tokenext>I was thinking about Skynet .
Basically we were discussing yesterday how the US military has more than three times the number of unmanned aircraft as regular manned ones .
So , some genius must realize in the next couple of years : why let some poor spotter risk his life to point out where the unmanned aircraft must drop its bombs if you can have some sort of AI doing that ?
Give it like 5 years before the whole US defense system is controlled by some sort of mighty AI .
Then as you said Skynet comes in .</tokentext>
<sentencetext>I was thinking about Skynet.
Basically we were discussing yesterday how the US military has more than three times the number of unmanned aircraft as regular manned ones.
So, some genius must realize in the next couple of years: why let some poor spotter risk his life to point out where the unmanned aircraft must drop its bombs if you can have some sort of AI doing that?
Give it like 5 years before the whole US defense system is controlled by some sort of mighty AI.
Then as you said Skynet comes in.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688888</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689534</id>
	<title>It's only a Scheme lib</title>
	<author>kikito</author>
	<datestamp>1270057980000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>This is just a library for Scheme. It does the same things that have been done before. In Scheme.</p><p>Move along.</p></htmltext>
<tokenext>This is just a library for Scheme .
It does the same things that have been done before .
In Scheme . Move along .</tokentext>
<sentencetext>This is just a library for Scheme.
It does the same things that have been done before.
In Scheme. Move along.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31695374</id>
	<title>Re:Basically...</title>
	<author>Anonymous</author>
	<datestamp>1270039860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>There was a bit of recent philosophy, although I have only a very vague recollection of it. The general idea behind it was that a thing that can neither be proved nor disproved is irrelevant. This ended on an "oops" when people realized that the statement makes itself irrelevant. It is something to think about, which coincidentally points back to an old approach. After all, if you don't try to solve a problem you never will, whether it can or cannot be solved. Thinking about it can be useful either way, since "woods are the only place I can see a clear path", which makes a Chinese Room somewhat of a brain-fuck example worth pursuing.</p></htmltext>
<tokenext>There was a bit of recent philosophy although I have a very vague recollection of it .
General idea behind it was that a thing that can neither be proved nor disproved is irrelevant .
This ended on an " oops " when people realized that statement makes itself irrelevant .
It is something to think about , which coincidentally points back to an old approach .
After all if you do n't try to solve a problem you never will , whether it can or can not be solved .
Thinking about it can be useful either way since " woods are the only place I can see a clear path " which makes a Chinese Room somewhat of a brain fuck example worth pursuing .
 </tokentext>
<sentencetext>There was a bit of recent philosophy although I have a very vague recollection of it.
General idea behind it was that a thing that can neither be proved nor disproved is irrelevant.
This ended on an "oops" when people realized that statement makes itself irrelevant.
It is something to think about, which coincidentally points back to an old approach.
After all if you don't try to solve a problem you never will, whether it can or cannot be solved.
Thinking about it can be useful either way since "woods are the only place I can see a clear path" which makes a Chinese Room somewhat of a brain fuck example worth pursuing.
 </sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691216</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689960</id>
	<title>Re:Endless vs. infinite</title>
	<author>jd</author>
	<datestamp>1270059900000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>There are different sizes of infinity, and therefore it is entirely possible for an infinite task to grow into a larger infinite task.</p></htmltext>
<tokenext>There are different sizes of infinity , and therefore it is entirely possible for an infinite task to grow into a larger infinite task .</tokentext>
<sentencetext>There are different sizes of infinity, and therefore it is entirely possible for an infinite task to grow into a larger infinite task.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688930</parent>
</comment>
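The parent comment's claim that infinities come in different sizes is Cantor's theorem; stated minimally (a side note on the quoted claim, with the standard cardinal notation):

```latex
% Cantor's theorem: any set is strictly smaller than its power set,
% so an infinite enumeration task can always grow into a strictly larger one.
|\mathcal{P}(A)| > |A| \quad \text{for every set } A,
\qquad \text{e.g. } 2^{\aleph_0} > \aleph_0 .
```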
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689804</id>
	<title>Re:Interesting Idea</title>
	<author>Anonymous</author>
	<datestamp>1270059180000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><div class="quote"><p>The first time I saw an airplane, I didn't think the damn thing could fly.</p></div><p>The first time I saw an airplane, I was just a kid. Physics and aerodynamics didn't mean much to me, so airplanes flying wasn't that much of a stretch of the imagination.</p><p>I didn't develop the "airplanes can't fly" concept until I'd worked for Boeing for a few years.</p></htmltext>
<tokenext>The first time I saw an airplane , I did n't think the damn thing could fly.The first time I saw an airplane , I was just a kid .
Physics and aerodynamics did n't mean much to me , so airplanes flying was n't that much of a stretch of the imagination.I did n't develop the " airplanes ca n't fly " concept until I 'd worked for Boeing for a few years .</tokentext>
<sentencetext>The first time I saw an airplane, I didn't think the damn thing could fly.The first time I saw an airplane, I was just a kid.
Physics and  aerodynamics didn't mean much to me, so airplanes flying wasn't that much of a stretch of the imagination.I didn't develop the "airplanes can't fly" concept until I'd worked for Boeing for a few years.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689236</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689050</id>
	<title>Re:Interesting Idea</title>
	<author>Anonymous</author>
	<datestamp>1270055880000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>"...these things are always very poorly optimized when they&rsquo;ve just been built."</p></div><p><a href="http://xkcd.com/720/" title="xkcd.com">XKCD #720</a> [xkcd.com]</p></htmltext>
<tokenext>" ...these things are always very poorly optimized when they    ve just been built .
" XKCD # 720 [ xkcd.com ]</tokentext>
<sentencetext>"...these things are always very poorly optimized when they’ve just been built.
" XKCD #720 [xkcd.com]
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689826</id>
	<title>Re:The real summary</title>
	<author>phantomfive</author>
	<datestamp>1270059300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You kind of have to feel sorry <a href="http://www.mit.edu/~ndg/" title="mit.edu">for the guy</a> [mit.edu]: he just got a new job as a researcher studying human cognition, so he's under pressure to come up with <i>something</i>.  Secondly, it's not an easy field to come up with something in.  It's not like nutritional science, where you can easily design an experiment to measure the effects of high-vitamin-C diets and suddenly have a publishable paper, even if the result is "absolutely no effect."  To add to the difficulty, the <a href="http://cocosci.mit.edu/" title="mit.edu">group he is with</a> [mit.edu] seems to think that Bayesian statistics are the <i>key</i> to human cognition.  So he has to do something based on Bayesian statistics, despite the fact that Bayesian statistics have been used in AI research since the 50s, by smart people who've taken all the easy ideas.<br> <br>
Given all the pressures he was under, and the requirements he had to meet, I'd say he actually did a pretty good job.  Not particularly useful from a practical standpoint, but it will allow him to get another research grant for next year (wow, I sound cynical).</htmltext>
<tokenext>You kind of have to feel sorry for the guy [ mit.edu ] , he just got a new job as a researcher studying human cognition , so he 's under pressure to come up with something .
Secondly it 's not an easy field to come up with something .
It 's not like nutritional science where you can easily design an experiment to measure the effects of high-vitamin-C diets , and suddenly you have a publishable paper , even if the result is " absolutely no effect .
" To add to the difficulty , the group he is with [ mit.edu ] seems to think that Bayesian statistics are the key to human cognition .
So he has to do something based on Bayesian statistics .
Despite the fact that Bayesian statistics have been used in AI research since the 50s , by smart people who 've taken all the easy ideas .
Given all the pressures he was under , and the requirements he had to meet , I 'd say he actually did a pretty good job .
Not particularly useful from a practical standpoint , but it will allow him to get another research grant for next year ( wow , I sound cynical ) .</tokentext>
<sentencetext>You kind of have to feel sorry for the guy [mit.edu], he just got a new job as a researcher studying human cognition, so he's under pressure to come up with something.
Secondly it's not an easy field to come up with something.
It's not like nutritional science where you can easily design an experiment to measure the effects of high-vitamin-C diets, and suddenly you have a publishable paper, even if the result is "absolutely no effect.
"  To add to the difficulty, the group he is with [mit.edu] seems to think that Bayesian statistics are the key to human cognition.
So he has to do something based on Bayesian statistics.
Despite the fact that Bayesian statistics have been used in AI research since the 50s, by smart people who've taken all the easy ideas.
Given all the pressures he was under, and the requirements he had to meet, I'd say he actually did a pretty good job.
Not particularly useful from a practical standpoint, but it will allow him to get another research grant for next year (wow, I sound cynical).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689126</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691982</id>
	<title>Re:MIT needs to get their PR department under cont</title>
	<author>Anonymous</author>
	<datestamp>1270067880000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The Slashdot admiration engine is all of the PR that any of these schools need.  You'd hardly think that any meaningful AI research is done outside of these big 3.</p></htmltext>
<tokenext>The Slashdot admiration engine is all of the PR that any of these schools need .
You 'd hardly think that any meaningful AI research is done outside of these big 3 .</tokentext>
<sentencetext>The Slashdot admiration engine is all of the PR that any of these schools need.
You'd hardly think that any meaningful AI research is done outside of these big 3.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689786</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689696</id>
	<title>Church is just a PL</title>
	<author>exa</author>
	<datestamp>1270058640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It is indeed one component of such an AGI, but it hardly qualifies as a "grand theory" of AI.</p><p>I think people at MIT are kind of jealous of AGI theorists, looking at the way they assert their claims of a "unified theory", as if they invented something wholly new and wonderful while making their uber-theoretical brains work on this grand problem that no one else ever thought about.</p><p>That is, after decades of dabbling with all sorts of nonsense like those stupid "gesture making" robots and whatnot, they come to realize that probabilistic inference is the key *now*? Like 50 years late?</p><p>And they needed the cognitive science department to figure that out? Is it because the AI lab is still infested by behaviorists?</p><p>Why didn't they just ask the theorists or make a survey of mathematical AI theories that have been in existence for several decades?</p><p>Is it really surprising that a general purpose AI needs a) probabilistic inference and b) a universal computer with probabilistic primitives?</p><p>In fact, those turn out to be <i>some</i> of the axioms of a general purpose AI, discovered by Ray Solomonoff in the second half of the 20th century.</p><p>I am laughing now.</p></htmltext>
<tokenext>It is indeed one component of such an AGI , but it hardly qualifies as a " grand theory " of AI .
I think people at MIT are kind of jealous of AGI theorists , looking at the way they assert their claims of a " unified theory " , as if they invented something wholly new and wonderful while making their uber-theoretical brains work on this grand problem that no one else ever thought about .
That is , after decades of dabbling with all sorts of nonsense like those stupid " gesture making " robots and whatnot , they come to realize that probabilistic inference is the key * now * ?
Like 50 years late ?
And they needed the cognitive science department to figure that out ?
Is it because the AI lab is still infested by behaviorists ?
Why did n't they just ask the theorists or make a survey of mathematical AI theories that have been in existence for several decades ?
Is it really surprising that a general purpose AI needs a ) probabilistic inference and b ) a universal computer with probabilistic primitives ?
In fact , those turn out to be some of the axioms of a general purpose AI , discovered by Ray Solomonoff in the second half of the 20th century .
I am laughing now .</tokentext>
<sentencetext>It is indeed one component of such an AGI, but it hardly qualifies as a "grand theory" of AI.
I think people at MIT are kind of jealous of AGI theorists, looking at the way they assert their claims of a "unified theory", as if they invented something wholly new and wonderful while making their uber-theoretical brains work on this grand problem that no one else ever thought about.
That is, after decades of dabbling with all sorts of nonsense like those stupid "gesture making" robots and whatnot, they come to realize that probabilistic inference is the key *now*?
Like 50 years late? And they needed the cognitive science department to figure that out?
Is it because the AI lab is still infested by behaviorists? Why didn't they just ask the theorists or make a survey of mathematical AI theories that have been in existence for several decades?
Is it really surprising that a general purpose AI needs a) probabilistic inference and b) a universal computer with probabilistic primitives? In fact, those turn out to be some of the axioms of a general purpose AI, discovered by Ray Solomonoff in the second half of the 20th century.
I am laughing now.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689714</id>
	<title>Look at today's date</title>
	<author>Anonymous</author>
	<datestamp>1270058700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Tomorrow is the 1st of April, folks.</p></htmltext>
<tokenext>Tomorrow is the 1st of April folks .</tokentext>
<sentencetext>Tomorrow is the 1st of April folks.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689396</id>
	<title>is it me</title>
	<author>thisisnotreal</author>
	<datestamp>1270057440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>or are the comments ridiculously funny today.
thanks for the fish, indeed.</htmltext>
<tokenext>or are the comments ridiculously funny today .
thanks for the fish , indeed .</tokentext>
<sentencetext>or are the comments ridiculously funny today.
thanks for the fish, indeed.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688978</id>
	<title>Prior Art By</title>
	<author>Anonymous</author>
	<datestamp>1270055520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>A. A. Aaby: <a href="http://74.125.155.132/scholar?q=cache:0I4EMmOT1S0J:scholar.google.com/+cyc+aaby&amp;hl=en&amp;as_sdt=4000000" title="74.125.155.132" rel="nofollow">It's all about metaphor</a> [74.125.155.132]</p><p>Yours In Perm,<br>K. Trout</p></htmltext>
<tokenext>A. A. Aaby : It 's all about metaphor [ 74.125.155.132 ] Yours In Perm , K. Trout</tokentext>
<sentencetext>A. A. Aaby: It's all about metaphor [74.125.155.132] Yours In Perm, K. Trout</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882</id>
	<title>Interesting Idea</title>
	<author>Anonymous</author>
	<datestamp>1270055160000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext>But <a href="http://docs.google.com/viewer%3Fa%3Dv%26q%3Dcache:rvsEGwxqI4wJ:www.mit.edu/~ndg/papers/churchUAI08_rev2.pdf%2Bchurch%2Blanguage%26hl%3Den%26gl%3Dus%26pid%3Dbl%26srcid%3DADGEESgflkH_CGKgkw59dt_Ir_iWxCbgCcK_mLkl7ctpoEW_R4I93vurMax7kTfS8Z_zlZmTV1VdVO7FT2rC8fqSkJvaSVNO4UMd4hlnYtGsKWdtkwuEwLH_UGCmG80WwgbiRwK0Q6SF%26sig%3DAHIEtbSnIVPwsWQxqFeLHtrydSd50dCPLQ&amp;ei=iXGzS8z_G4KglAef6_G5BA&amp;sa=X&amp;oi=gview&amp;resnum=2&amp;ct=other&amp;ved=0CA4QxQEwAQ&amp;usg=AFQjCNGYIOn9aaFTn9LX4AfPnnccNaKYig" title="google.com">this is from 2008</a> [google.com].  In addition to that, it faces some problems similar to the other two models.  Their example:<div class="quote"><p>Told that the cassowary is a bird, a program written in Church might conclude that cassowaries can probably fly. But if the program was then told that cassowaries can weigh almost 200 pounds, it might revise its initial probability estimate, concluding that, actually, cassowaries probably can&rsquo;t fly.</p></div><p>But you just induced a bunch of rules I didn't know were in your system.  That things over 200 lbs are unlikely to fly.  But wait, 747s are heavier than that.  Oh, we need to know that <i>animals</i> over 200 lbs rarely have the ability of flight.  Unless the cassowary is an extinct dinosaur, in which case there might have been one<nobr> <wbr></nobr>... again, creativity and human analysis present quite the barrier to AI.</p><div class="quote"><p>Chater cautions that, while Church programs perform well on such targeted tasks, they&rsquo;re currently too computationally intensive to serve as general-purpose mind simulators. &ldquo;It&rsquo;s a serious issue if you&rsquo;re going to wheel it out to solve every problem under the sun,&rdquo; Chater says. &ldquo;But it&rsquo;s just been built, and these things are always very poorly optimized when they&rsquo;ve just been built.&rdquo; And Chater emphasizes that getting the system to work at all is an achievement in itself: &ldquo;It&rsquo;s the kind of thing that somebody might produce as a theoretical suggestion, and you&rsquo;d think, &lsquo;Wow, that&rsquo;s fantastically clever, but I&rsquo;m sure you&rsquo;ll never make it run, really.&rsquo; And the miracle is that it does run, and it works.&rdquo;</p></div><p>That sounds familiar<nobr> <wbr></nobr>... in both the rule-based and probability-based AI, they say that you need a large rule corpus or many probabilities accurately computed ahead of time to make the system work.  Problem is that you never scratch the surface of a human mind's lifetime experience.  And Chater's method, I suspect, is similarly stunted.</p><p>I have learned today that putting 'grand' and 'unified' in the title of an idea in science is very powerful for marketing.</p></htmltext>
<tokenext>But this is from 2008 [ google.com ] .
In addition to that , it faces some problems similar to the other two models .
Their example : Told that the cassowary is a bird , a program written in Church might conclude that cassowaries can probably fly .
But if the program was then told that cassowaries can weigh almost 200 pounds , it might revise its initial probability estimate , concluding that , actually , cassowaries probably ca n't fly .
But you just induced a bunch of rules I did n't know were in your system .
That things over 200 lbs are unlikely to fly .
But wait , 747s are heavier than that .
Oh , we need to know that animals over 200 lbs rarely have the ability of flight .
Unless the cassowary is an extinct dinosaur , in which case there might have been one ... again , creativity and human analysis present quite the barrier to AI .
Chater cautions that , while Church programs perform well on such targeted tasks , they 're currently too computationally intensive to serve as general-purpose mind simulators .
" It 's a serious issue if you 're going to wheel it out to solve every problem under the sun , " Chater says .
" But it 's just been built , and these things are always very poorly optimized when they 've just been built . "
And Chater emphasizes that getting the system to work at all is an achievement in itself : " It 's the kind of thing that somebody might produce as a theoretical suggestion , and you 'd think , ' Wow , that 's fantastically clever , but I 'm sure you 'll never make it run , really . '
And the miracle is that it does run , and it works . "
That sounds familiar ... in both the rule-based and probability-based AI , they say that you need a large rule corpus or many probabilities accurately computed ahead of time to make the system work .
Problem is that you never scratch the surface of a human mind 's lifetime experience .
And Chater 's method , I suspect , is similarly stunted .
I have learned today that putting ' grand ' and ' unified ' in the title of an idea in science is very powerful for marketing .</tokentext>
<sentencetext>But this is from 2008 [google.com].
In addition to that, it faces some problems similar to the other two models.
Their example: Told that the cassowary is a bird, a program written in Church might conclude that cassowaries can probably fly.
But if the program was then told that cassowaries can weigh almost 200 pounds, it might revise its initial probability estimate, concluding that, actually, cassowaries probably can't fly.
But you just induced a bunch of rules I didn't know were in your system.
That things over 200 lbs are unlikely to fly.
But wait, 747s are heavier than that.
Oh, we need to know that animals over 200 lbs rarely have the ability of flight.
Unless the cassowary is an extinct dinosaur, in which case there might have been one ... again, creativity and human analysis present quite the barrier to AI.
Chater cautions that, while Church programs perform well on such targeted tasks, they're currently too computationally intensive to serve as general-purpose mind simulators.
"It's a serious issue if you're going to wheel it out to solve every problem under the sun," Chater says.
"But it's just been built, and these things are always very poorly optimized when they've just been built." And Chater emphasizes that getting the system to work at all is an achievement in itself: "It's the kind of thing that somebody might produce as a theoretical suggestion, and you'd think, 'Wow, that's fantastically clever, but I'm sure you'll never make it run, really.' And the miracle is that it does run, and it works."
That sounds familiar ... in both the rule-based and probability-based AI, they say that you need a large rule corpus or many probabilities accurately computed ahead of time to make the system work.
Problem is that you never scratch the surface of a human mind's lifetime experience.
And Chater's method, I suspect, is similarly stunted.
I have learned today that putting 'grand' and 'unified' in the title of an idea in science is very powerful for marketing.
	</sentencetext>
</comment>
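The belief revision described in the quoted cassowary example is ordinary Bayesian updating; a minimal sketch in plain Python (this is not the actual Church code, and the prior and likelihoods are invented for illustration):

```python
# Toy Bayesian revision for the quoted cassowary example.
# All numbers are illustrative priors, not values from the Church paper.

def posterior_flies(prior_flies, p_heavy_given_flies, p_heavy_given_flightless):
    """Bayes' rule: update P(flies) after observing 'weighs almost 200 lbs'."""
    p_heavy = (p_heavy_given_flies * prior_flies
               + p_heavy_given_flightless * (1.0 - prior_flies))
    return p_heavy_given_flies * prior_flies / p_heavy

# "Told that the cassowary is a bird": most birds fly, so a high prior.
prior = 0.9
# Very few flying birds weigh anywhere near 200 lbs; many flightless ones do.
updated = posterior_flies(prior, p_heavy_given_flies=0.001,
                          p_heavy_given_flightless=0.5)

print(round(prior, 3), round(updated, 3))  # belief in flight collapses
```

The commenter's objection still stands: the likelihoods encoding "heavy animals rarely fly" have to come from somewhere, which is exactly the hidden-rules problem being pointed out.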
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688818</id>
	<title>That is very interesting</title>
	<author>BadAnalogyGuy</author>
	<datestamp>1270054920000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><p>Tell me about you to build a cognitive model in a fantastically much more straightforward and transparent way than you could do before.</p></htmltext>
<tokenext>Tell me about you to build a cognitive model in a fantastically much more straightforward and transparent way than you could do before .</tokentext>
<sentencetext>Tell me about you to build a cognitive model in a fantastically much more straightforward and transparent way than you could do before.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689168</id>
	<title>Grand Unified Theory of AI?  Hardly.</title>
	<author>Anonymous</author>
	<datestamp>1270056360000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>The way the author wrote the article, it seems like nothing different from an expert system straight from the 70's, e.g. MYCIN.  That one also uses probabilities and rules; the only difference is that it diagnoses illnesses, but that can be extended to almost anything.</p><p>Probably the only contribution is a new language.  Which, I'm guessing, probably doesn't deviate much from, say, CLIPS (and at least THAT language is searchable in Google... I can't seem to find the correct search terms for Noah Goodman's language without getting photos of cathedrals, so I can't even say if I'm correct)</p><p>AI at this point has diverged so much from just probabilities and rules that it's not practical to "unify" it as the author claims.  Just look up AAAI and its many conferences and subconferences.  I just submitted a paper to an AI workshop... in a conference<nobr> <wbr></nobr>... in a GROUP of co-located conferences<nobr> <wbr></nobr>... that is recognized by AAAI as one specialization among many.  That's FOUR branches removed.</p></htmltext>
<tokenext>The way the author wrote the article , it seems like nothing different from an expert system straight from the 70 's , e.g. MYCIN .
That one also uses probabilities and rules ; the only difference is that it diagnoses illnesses , but that can be extended to almost anything .
Probably the only contribution is a new language .
Which , I 'm guessing , probably does n't deviate much from , say , CLIPS ( and at least THAT language is searchable in Google ... I ca n't seem to find the correct search terms for Noah Goodman 's language without getting photos of cathedrals , so I ca n't even say if I 'm correct ) .
AI at this point has diverged so much from just probabilities and rules that it 's not practical to " unify " it as the author claims .
Just look up AAAI and its many conferences and subconferences .
I just submitted a paper to an AI workshop... in a conference ... in a GROUP of co-located conferences ... that is recognized by AAAI as one specialization among many .
That 's FOUR branches removed .</tokentext>
<sentencetext>The way the author wrote the article, it seems like nothing different from an expert system straight from the 70's, e.g. MYCIN.
That one also uses probabilities and rules; the only difference is that it diagnoses illnesses, but that can be extended to almost anything.
Probably the only contribution is a new language.
Which, I'm guessing, probably doesn't deviate much from, say, CLIPS (and at least THAT language is searchable in Google... I can't seem to find the correct search terms for Noah Goodman's language without getting photos of cathedrals, so I can't even say if I'm correct).
AI at this point has diverged so much from just probabilities and rules that it's not practical to "unify" it as the author claims.
Just look up AAAI and its many conferences and subconferences.
I just submitted a paper to an AI workshop... in a conference ... in a GROUP of co-located conferences ... that is recognized by AAAI as one specialization among many.
That's FOUR branches removed.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690106</id>
	<title>Re:That is very interesting</title>
	<author>Anonymous</author>
	<datestamp>1270060320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Not so much about the article as it is about Einstein. Should I put up a grand unified theory of manure so we can associate Einstein's face with it? I've got a personal relationship with Einstein. He is very disappointed with me for not learning all I need to to have a discussion with him; you're directing me wrong. This is not the way. Put up a comp science pioneer or something. How about John von Neumann?</p></htmltext>
<tokenext>Not so much about the article as it is about Einstein .
Should I put up a grand unified theory of manure so we can associate Einstein 's face with it ?
I 've got a personal relationship with Einstein .
He is very disappointed with me for not learning all I need to to have a discussion with him ; you 're directing me wrong .
This is not the way .
Put up a comp science pioneer or something .
How about John von Neumann ?</tokentext>
<sentencetext>Not so much about the article as it is about Einstein.
Should I put up a grand unified theory of manure so we can associate Einstein's face with it?
I've got a personal relationship with Einstein.
He is very disappointed with me for not learning all I need to to have a discussion with him; you're directing me wrong.
This is not the way.
Put up a comp science pioneer or something.
How about John von Neumann?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688818</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689304</id>
	<title>Re:Terrible Summary</title>
	<author>Anonymous</author>
	<datestamp>1270057020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>alla da hype. nunna da promise.</p><p>it's teh way of teh future!</p></htmltext>
<tokenext>alla da hype .
nunna da promise.it 's teh way of teh future !</tokentext>
<sentencetext>alla da hype.
nunna da promise.it's teh way of teh future!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688938</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690608</id>
	<title>A clear case for serif fonts</title>
	<author>HikingStick</author>
	<datestamp>1270062480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>When I saw the headline, I thought it read "Grand Unified Theory of AL". I thought someone finally understood the life and mind of "Weird Al" Yankovic.</htmltext>
<tokenext>Whe I saw the headline , I thought it read " Grand Unified Theory of AL " .
I thought someone finally understood the life and mind of " Wierd Al " Yankovic .</tokentext>
<sentencetext>Whe I saw the headline, I thought it read "Grand Unified Theory of AL".
I thought someone finally understood the life and mind of "Wierd Al" Yankovic.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688902</id>
	<title>New input for the system</title>
	<author>Anonymous</author>
	<datestamp>1270055280000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><ol>
<li>1) New rule: "Colorless green ideas sleep furiously."</li><li>2) ...</li><li>3) Profit!</li></ol></htmltext>
<tokenext>1 ) New rule : " Colorless green ideas sleep furiously .
" 2 ) ...3 ) Profit !</tokentext>
<sentencetext>
1) New rule:  "Colorless green ideas sleep furiously.
"2) ...3) Profit!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31692968</id>
	<title>Re:Interesting Idea</title>
	<author>mikael</author>
	<datestamp>1270028100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You would need to add rules relating to aerodynamics and biology.</p><p>What is the cassowary's wingspan and surface area? There is a relationship between volume, mass and surface area. And lifting power/stability is dependent on wing shape.</p><p>In which part of the world does it live? That gives a list of potential food sources, nesting and breeding grounds. Does it nest in treetops, mountains, rivers or just spend all its time gliding?</p><p>If it is 200 lbs, it is unlikely to eat insects as its main diet, and would be limited by its lifting power.</p><p>Is it the legendary thunderbird that lives in mountains, and only appears after thunderstorms, because only the thermals of those storms will provide it with enough lift?</p></htmltext>
<tokenext>You would need to add rules relating to aerodynamics and biology.What is the cassowary 's wing span and surface area ?
There is a relationship between volume , mass and surface area .
And lifting power/stability is dependent on wing shape.Which part of the world does it live ?
That gives a list of potential food sources , nesting and breeding grounds .
Does it nest in treetops , mountains , rivers or just spend all time gliding ? If it is 200lbs , it is unlikely to eat insects as it 's main diet , and would be limited by it 's lifting power.Is it the legendary thunderbird that lives in mountains , and only appears after thunderstorms , because only the thermals of those storms will provide it with enough lift ?</tokentext>
<sentencetext>You would need to add rules relating to aerodynamics and biology.What is the cassowary's wing span and surface area?
There is a relationship between volume, mass and surface area.
And lifting power/stability is dependent on wing shape.Which part of the world does it live?
That gives a list of potential food sources, nesting and breeding grounds.
Does it nest in treetops, mountains, rivers or just spend all time gliding?If it is 200lbs, it is unlikely to eat insects as it's main diet, and would be limited by it's lifting power.Is it the legendary thunderbird that lives in mountains, and only appears after thunderstorms, because only the thermals of those storms will provide it with enough lift?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688930</id>
	<title>Re:Endless vs. infinite</title>
	<author>zero\_out</author>
	<datestamp>1270055400000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>My understanding is that an endless task is finite at any point in time, but continues to grow for eternity.</p><p>An infinite task is one that, at any point in time, has no bounds.  An infinite task cannot "grow" since it would need a finite state to then become larger than it.</p></htmltext>
<tokenext>My understanding is that an endless task is finite at any point in time , but continues to grow for eternity.An infinite task is one that , at any point in time , has no bounds .
An infinite task can not " grow " since it would need a finite state to then become larger than it .</tokentext>
<sentencetext>My understanding is that an endless task is finite at any point in time, but continues to grow for eternity.An infinite task is one that, at any point in time, has no bounds.
An infinite task cannot "grow" since it would need a finite state to then become larger than it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688842</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689474</id>
	<title>If we program the logic...</title>
	<author>fhuglegads</author>
	<datestamp>1270057740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If we program the logic for the AI and the AI system predicts outcomes, it's based on the algorithm that is used to make predictions.
<br> <br>
I cannot grasp how a computer can think of something that a human cannot because a computer only knows what we know. It is not capable of experience. As far as I can tell, the only thing AI can do is calculate something faster than humans can.
<br> <br>
If you have a robot that learns how to move around a building without crashing into objects, learning through the experience of bumping into them, it's just processing and responding as it was told to do.
<br> <br>
Maybe I'm wrong. I'm not an AI expert, but it all seems like a fancy way of saying, "I programmed a device to act how I wanted it to." All of the probabilistic data is analyzed by a person first. An AI device can only be as "intelligent" as its creator.</htmltext>
<tokenext>If we program the logic for the AI and the AI system predicts outcomes it 's based on the algorithm that is used to make predictions .
I can not grasp how a computer can think of something that a human can not because a computer only knows what we know .
It is not capable of experience .
As far as I can tell , the only thing AI can do is calculate something faster than humans can .
If you have a robot that learns how to move around a building without crashing into objects that learns through the experience of bumping into them it 's just processing and responding as it was told to do .
Maybe I 'm wrong .
I 'm not an AI expert but it all seems like a fancy way of saying , " I programmed a device to act how I wanted it to .
" All of the probabilistic data is analyzed by a person first .
An AI device can only be as " intelligent " as it 's creator .</tokentext>
<sentencetext>If we program the logic for the AI and the AI system predicts outcomes it's based on the algorithm that is used to make predictions.
I cannot grasp how a computer can think of something that a human cannot because a computer only knows what we know.
It is not capable of experience.
As far as I can tell, the only thing AI can do is calculate something faster than humans can.
If you have a robot that learns how to move around a building without crashing into objects that learns through the experience of bumping into them it's just processing and responding as it was told to do.
Maybe I'm wrong.
I'm not an AI expert but it all seems like a fancy way of saying, "I programmed a device to act how I wanted it to.
" All of the probabilistic data is analyzed by a person first.
An AI device can only be as "intelligent" as it's creator.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691956</id>
	<title>Re:Interesting Idea</title>
	<author>mcgrew</author>
	<datestamp>1270067760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>An AI doesn't mean it comes equipped with the sum of human knowledge; it means it simulates the human mind.</i></p><p>A simulation of something poorly understood is a poor simulation.</p></htmltext>
<tokenext>An AI does n't mean it comes equipped with the sum of human knowledge ; it means it simulates the human mind.A simulation of something poorly understood is a poor simulation .</tokentext>
<sentencetext>An AI doesn't mean it comes equipped with the sum of human knowledge; it means it simulates the human mind.A simulation of something poorly understood is a poor simulation.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689236</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690052</id>
	<title>I, for one, ...</title>
	<author>Anonymous</author>
	<datestamp>1270060200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>... welcome our new Church overlords...</p><p>There! I said it. Now mod to oblivion etc etc...</p></htmltext>
<tokenext>... welcome our new Church overlords...There !
I said it .
Now mod to oblivion etc etc.. .</tokentext>
<sentencetext>... welcome our new Church overlords...There!
I said it.
Now mod to oblivion etc etc...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689008</id>
	<title>Re:Can I get some wafers with that Wine?</title>
	<author>Anonymous</author>
	<datestamp>1270055700000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>q.v. <a href="http://en.wikipedia.org/wiki/Alonzo\_Church" title="wikipedia.org" rel="nofollow">Alonzo Church</a> [wikipedia.org]</p></htmltext>
<tokenext>q.v .
Alonzo Church [ wikipedia.org ]</tokentext>
<sentencetext>q.v.
Alonzo Church [wikipedia.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688868</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690098</id>
	<title>Prior art?</title>
	<author>TiggertheMad</author>
	<datestamp>1270060320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I was going to comment that at a low level it looks to be similar to fuzzy logic in that it is using probability as thresholds for making decisions. If this is the case, then there isn't really anything groundbreaking about this model, since fuzzy logic has been around since the 1960s.</htmltext>
<tokenext>I was going to comment that at a low level it looks to be similar to fuzzy logic in that it is using probability as thresholds for making decisions .
If this is the case , then there is n't really anything groundbreaking about this model , since fuzzy logic has been around since the 1960s .</tokentext>
<sentencetext>I was going to comment that at a low level it looks to be similar to fuzzy logic in that it is using probability as thresholds for making decisions.
If this is the case, then there isn't really anything groundbreaking about this model, since fuzzy logic has been around since the 1960s.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691538</id>
	<title>child learn with rules</title>
	<author>Anonymous</author>
	<datestamp>1270066140000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>I have a child. When I watch her learn, it's totally rules-based. Also, very importantly, when she is told that her existing rules don't quite describe reality, she is quick to make a new exception (rule). Since she's young, her mind is flexible and she doesn't get angry when it's necessary to make an exception. The new rule stands until a new exception comes up.</p><p>E.g., in English she wrote "there toy" since she wasn't familiar with the other "theres". She was corrected to "their toy". But of course, there is still "they're".</p></htmltext>
<tokenext>I have a child .
When I watch her learn its totally rules based .
Also , very importantly when she is told that her existing rules do n't quite describe reality she is quick to make a new exception ( rule ) .
Since she 's young her mind is flexible and she does n't get angry when its necessary to make an exception .
The new rule stands until a new exception comes up.eg in english she wrote " there toy " since she was n't familiar with the other there 's .
She was corrected to " their toy " .
But of course , there is still " they 're " .</tokentext>
<sentencetext>I have a child.
When I watch her learn its totally rules based.
Also, very importantly when she is told that her existing rules don't quite describe reality she is quick to make a new exception (rule).
Since she's young her mind is flexible and she doesn't get angry when its necessary to make an exception.
The new rule stands until a new exception comes up.eg in english she wrote "there toy" since she wasn't familiar with the other there's.
She was corrected to "their toy".
But of course, there is still "they're".</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690674</id>
	<title>Cyc -- inference and AI</title>
	<author>BenBoy</author>
	<datestamp>1270062720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Just adding to the chorus of "nothing new here"; here's an excerpt from a 1999 <a href="http://weeklywire.com/ww/12-28-99/austin\_screens\_feature.html" title="weeklywire.com" rel="nofollow">interview</a> [weeklywire.com] with Cycorp CEO Doug Lenat:<blockquote><div><p>DL: We're already able to see isolated cases where Cyc is learning things on its own. Some of the things it learns reflect the incompleteness of its knowledge and are just funny. For example, <b>Cyc at one point concluded that everyone born before 1900 was famous, because all the people that it knew about and who lived in earlier times were famous people</b>. There are similar sorts of errors. But what we're seeing is not so much something that sits quietly on its own and makes discoveries but rather something that uses the knowledge it has to accelerate its own education.</p></div>
</blockquote>
	</htmltext>
<tokenext>Just adding the chorus of " nothing new here " ; here 's an excerpt from a 1999 interview [ weeklywire.com ] with Cycorp CEO Doug Lenat : DL : We 're already able to see isolated cases where Cyc is learning things on its own .
Some of the things it learns reflect the incompleteness of its knowledge and are just funny .
For example , Cyc at one point concluded that everyone born before 1900 was famous , because all the people that it knew about and who lived in earlier times were famous people .
There are similar sorts of errors .
But what we 're seeing is not so much something that sits quietly on its own and makes discoveries but rather something that uses the knowledge it has to accelerate its own education .</tokentext>
<sentencetext>Just adding the chorus of "nothing new here"; here's an excerpt from a 1999 interview [weeklywire.com] with Cycorp CEO Doug Lenat:DL: We're already able to see isolated cases where Cyc is learning things on its own.
Some of the things it learns reflect the incompleteness of its knowledge and are just funny.
For example, Cyc at one point concluded that everyone born before 1900 was famous, because all the people that it knew about and who lived in earlier times were famous people.
There are similar sorts of errors.
But what we're seeing is not so much something that sits quietly on its own and makes discoveries but rather something that uses the knowledge it has to accelerate its own education.

	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690150</id>
	<title>Re: Wallace Shawn in the Room</title>
	<author>Kyont</author>
	<datestamp>1270060500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Given a film of two people talking, a computer with decent AI would categorize objects, identify people versus, say, a lamp, determine the people are engaged in action (versus a lamp just sitting there) making that relevant, hear the sound coming from the people, then infer they are talking (making the link). Then, in parallel, the computer would filter out the chair and various scenery in the thread now processing "CONVERSATION".</p></div></blockquote><p>This may be the most succinct review I've ever read of "My Dinner With Andre"!</p>
	</htmltext>
<tokenext>Given a film of two people talking a computer with decent AI would catagorize objects , identify people versus say a lamp , determine the people are engaged in action ( versus a lamp just sitting there ) making that relevant , hear the sound coming from the people then infer they are talking ( making the link .
) Then paralell the computer would filter out the chair , and various scenery in the thread now processing " CONVERSATION " .This may be the most succinct review I 've ever read of " My Dinner With Andre " !</tokentext>
<sentencetext>Given a film of two people talking a computer with decent AI would catagorize objects, identify people versus say a lamp, determine the people are engaged in action (versus a lamp just sitting there) making that relevant, hear the sound coming from the people then infer they are talking (making the link.
) Then paralell the computer would filter out the chair, and various scenery in the thread now processing "CONVERSATION".This may be the most succinct review I've ever read of "My Dinner With Andre"!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689628</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31697220</id>
	<title>Re:Interesting timing</title>
	<author>Anonymous</author>
	<datestamp>1270055160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Have you seen the Extensible Graphical Game Generator, PhD thesis by Jon Orwant?</p></htmltext>
<tokenext>Have you seen the Extensible Graphical Game Generator , PhD thesis by Jon Orwant ?</tokentext>
<sentencetext>Have you seen the Extensible Graphical Game Generator, PhD thesis by Jon Orwant?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689860</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689582</id>
	<title>isn't that the main plot point behind Caprica?</title>
	<author>Anonymous</author>
	<datestamp>1270058220000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>... You know, the TV series that teaches us not to use 16-year-old girls as the model for military robots.</p></htmltext>
<tokenext>... You know the TV series that teaches us to not use 16 year old girls as the model for military robots .</tokentext>
<sentencetext>... You know the TV series that teaches us to not use 16 year old girls as the model for military robots.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690984</id>
	<title>Time to move to decentralized torrents?</title>
	<author>acheron12</author>
	<datestamp>1270063920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Supposedly the software (<a href="http://tech.slashdot.org/tech/08/10/28/1722214.shtml" title="slashdot.org" rel="nofollow">Tribler</a> [slashdot.org]) is already available.</htmltext>
<tokenext>Supposedly the software ( Tribler [ slashdot.org ] ) is already available .</tokentext>
<sentencetext>Supposedly the software (Tribler [slashdot.org]) is already available.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689176</id>
	<title>Re:Grand unified Hyperbole of AI</title>
	<author>bluesatin</author>
	<datestamp>1270056420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Correct me if I'm wrong, but a child would presumably understand the wolf statement literally, with hair and everything. Presumably, as the list of rules grows (just as a child learns), the A.I.'s definition of what John is would change.</p><p>My question is: how do you expect to list all these rules when we can probably define hundreds of rules from a paragraph of information alone?</p><p>Would it also create a very racist A.I. that tends to use stereotypes to define everything?<br>Maybe until so many rules are learnt, it's very hard to statistically define anything, at least until more data is acquired about the object.</p></htmltext>
<tokenext>Correct me if I 'm wrong , but a child would presumably understand the wolf statement literally with hair and everything .
Presumably as the list of rules grow ( just as a child learns ) , the A.I .
's definition of what John is would change.My question is , how do you expect to list all these rules when we can probably define hundreds of rules from a paragraph of information alone.Would it also create a very racist A.I .
that tends to use stereotypes to define everything ? Maybe until so many rules are learnt , it 's very hard to statistically define anything , at least until more data is acquired about the object .</tokentext>
<sentencetext>Correct me if I'm wrong, but a child would presumably understand the wolf statement literally with hair and everything.
Presumably as the list of rules grow (just as a child learns), the A.I.
's definition of what John is would change.My question is, how do you expect to list all these rules when we can probably define hundreds of rules from a paragraph of information alone.Would it also create a very racist A.I.
that tends to use stereotypes to define everything?Maybe until so many rules are learnt, it's very hard to statistically define anything, at least until more data is acquired about the object.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688924</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691340</id>
	<title>We don't need a grand unifying theory!</title>
	<author>rm999</author>
	<datestamp>1270065300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>AI, unlike in the pure sciences, has no "answer" and therefore cannot have a grand unifying theory. There will never be a single algorithm that works for every type of problem we want to solve. AI is an applied science.</p><p>Besides, this stuff barely counts as "AI" in the modern sense. MIT embarrasses itself by pushing out stories like this.</p></htmltext>
<tokenext>AI , unlike in the pure sciences , has no " answer " and therefore can not have a grand unifying theory .
There will never be a single algorithm that works for every type of problem we want to solve .
AI is an applied science.Besides , this stuff barely counts as " AI " in the modern sense .
MIT embarrasses itself by pushing out stories like this .</tokentext>
<sentencetext>AI, unlike in the pure sciences, has no "answer" and therefore cannot have a grand unifying theory.
There will never be a single algorithm that works for every type of problem we want to solve.
AI is an applied science.Besides, this stuff barely counts as "AI" in the modern sense.
MIT embarrasses itself by pushing out stories like this.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31694264</id>
	<title>Re:Endless vs. infinite</title>
	<author>Physics Dude</author>
	<datestamp>1270034220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>There are different sizes of infinity, and therefore it is entirely possible for an infinite task to grow into a larger infinite task.</p></div><p>I don't see how, really. The different types of infinities are different in their <b>basic nature</b>. For example, you can start with an infinite set with cardinality aleph0, such as the integers, and let that 'grow' all you like, taking that infinity multiplied by that infinity, raised to the power of that infinity, etc., and you'll still never get to the next 'larger infinity' with cardinality aleph1, such as that of the real numbers. They're just a fundamentally different animal. It's the difference between discrete and non-discrete.</p>
	</htmltext>
<tokenext>There are different sizes of infinity , and therefore it is entirely possible for an infinite task to grow into a larger infinite task.I do n't see how really .
The different types of infinities are different in their basic nature .
For example , you can start with an infinite set with cardinality aleph0 such as the integers , and let that 'grow ' all you like , taking that infinity multiplied by that infinity , raised to the power of that infinity , etc .
and you 'll still never get to the next 'larger infinity ' with cardinality aleph1 such as that of the real numbers .
They 're just a fundamentally different animal .
It 's the difference between discrete and non-discrete .</tokentext>
<sentencetext>There are different sizes of infinity, and therefore it is entirely possible for an infinite task to grow into a larger infinite task.I don't see how really.
The different types of infinities are different in their basic nature.
For example, you can start with an infinite set with cardinality aleph0 such as the integers, and let that 'grow' all you like, taking that infinity multiplied by that infinity, raised to the power of that infinity, etc.
and you'll still never get to the next 'larger infinity' with cardinality aleph1 such as that of the real numbers.
They're just a fundamentally different animal.
It's the difference between discrete and non-discrete.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689960</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688976</id>
	<title>Re:Interesting Idea</title>
	<author>geekoid</author>
	<datestamp>1270055520000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>"That things over 200 lbs are unlikely to fly. But wait, 747s are heavier than that. Oh, we need to know that animals over 200 lbs rarely have the ability of flight"</p><p>What? He specifically stated birds. Not animals, or inanimate objects.</p><p>It looks like this system can change as it is used, effectively creating a 'lifetime' experience.</p><p>This is very promising. In fact, it may be the first step in creating a primitive household AI.</p><p>OR robotic systems used in manufacturing able to adjust the process as it goes, using inputs to determine better ways to do a job.</p></htmltext>
<tokenext>" That things over 200 lbs are unlikely to fly .
But wait , 747s are heavier than that .
Oh , we need to know that animals over 200 lbs rarely have the ability of flightwhat ?
He specifically stated birds .
Not Animals , or inanimate objects.It looks like this system can change as it is used , effectivly creating a 'lifetime ' experience.This is very promising .
In fact , it may be the first step in creating primitive house hold AI.OR robotic systems used in manufacturing able to adjust the process as it goes .
Using inputs to determine better ways to do a job .</tokentext>
<sentencetext>"That things over 200 lbs are unlikely to fly.
But wait, 747s are heavier than that.
Oh, we need to know that animals over 200 lbs rarely have the ability of flightwhat?
He specifically stated birds.
Not Animals, or inanimate objects.It looks like this system can  change as it is used, effectivly creating a 'lifetime' experience.This is very promising.
In fact, it may be the first step in creating primitive house hold AI.OR robotic systems used in manufacturing able to adjust the process as it goes.
Using inputs to determine better ways to do a job.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31692406</id>
	<title>Re:Can I get some wafers with that Wine?</title>
	<author>Raffaello</author>
	<datestamp>1270026120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Don't you lose your geek membership card for not knowing who Alonzo Church was?</p></htmltext>
<tokenext>Do n't you lose your geek membership card for not knowing who Alonzo Church was ?</tokentext>
<sentencetext>Don't you lose your geek membership card for not knowing who Alonzo Church was?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688868</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689236
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690518
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689236
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691956
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690592
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_56</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688938
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31697348
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689392
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691216
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31695374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_63</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689892
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689628
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689988
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688842
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688930
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690286
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688902
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689408
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691172
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_48</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689128
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31694052
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689860
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31697220
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_55</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688888
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690526
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688924
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689430
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_54</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691128
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689236
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689804
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31698632
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689674
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688842
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688930
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689858
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_61</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688888
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690102
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688842
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688930
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689960
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31694264
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688868
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689026
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689236
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690384
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31692808
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688868
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31694092
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688818
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689158
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688818
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690106
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_46</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688888
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690190
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689628
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31697176
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_53</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689786
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691982
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688868
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31692406
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689128
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690038
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690098
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688924
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689222
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688868
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689008
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688842
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689030
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688902
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690522
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689236
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689920
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688902
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689724
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_59</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689300
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689628
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690150
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_58</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689236
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690384
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31698332
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_49</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688888
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31696458
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688868
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690160
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689628
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31692774
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688842
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690262
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31692968
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689786
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31693528
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_57</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688938
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689304
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_62</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688842
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689202
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_47</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689628
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690186
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688888
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689126
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31692510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_52</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688888
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689126
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689826
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689690
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689168
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689652
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688976
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689834
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689050
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_60</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688868
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689496
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_51</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688868
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689042
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688924
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689176
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688888
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689608
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_50</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689628
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31693178
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690564
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688888
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689476
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688902
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690798
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688842
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688930
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689960
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690302
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_31_1440205_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689128
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691804
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688842
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689202
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690262
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688930
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689858
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689960
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690302
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31694264
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690286
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689030
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689028
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690608
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689628
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690150
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689988
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31693178
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31692774
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31697176
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690186
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689128
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691804
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690038
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31694052
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688888
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689126
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689826
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31692510
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690102
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690190
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690526
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31696458
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689608
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689476
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688868
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31692406
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31694092
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689026
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690160
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689496
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689042
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689008
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689392
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691216
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31695374
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688882
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690592
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688976
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689834
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689050
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31692968
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690098
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689674
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689236
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689804
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31698632
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690384
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31698332
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31692808
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689920
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691956
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690518
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689690
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689892
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689300
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690564
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691128
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690050
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688924
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689176
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689430
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689222
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688968
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689860
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31697220
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689714
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688902
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690798
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689724
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690522
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689408
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691172
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691538
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688818
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690106
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689158
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688916
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688938
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689304
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31697348
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31688994
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690052
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689440
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31690984
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689786
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31693528
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31691982
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_31_1440205.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689168
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_31_1440205.31689652
</commentlist>
</conversation>
