<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_01_16_194238</id>
	<title>CMU Web-Scraping Learns English, One Word At a Time</title>
	<author>timothy</author>
	<datestamp>1263669480000</datestamp>
	<htmltext>blee37 writes <i>"Researchers at Carnegie Mellon have developed a <a href="http://scitedaily.com/the-never-ending-language-learner/">web-scraping AI program that never dies</a>.  It runs continuously, extracting information from the web and using that information to learn more about the English language.  The idea is for a never ending learner like this to one day be able to become conversant in the English language."</i> It's not that the program <em>couldn't</em> stop running; the idea is that there's no fixed end-point. Rather, its progress in categorizing complex word relationships is the object of the research. See also CMU's <a href="http://rtw.ml.cmu.edu/readtheweb.html">"Read the Web" research project</a> site.</htmltext>
<tokentext>blee37 writes " Researchers at Carnegie Mellon have developed a web-scraping AI program that never dies .
It runs continuously , extracting information from the web and using that information to learn more about the English language .
The idea is for a never ending learner like this to one day be able to become conversant in the English language . "
It 's not that the program could n't stop running ; the idea is that there 's no fixed end-point .
Rather , its progress in categorizing complex word relationships is the object of the research .
See also CMU 's " Read the Web " research project site .</tokentext>
<sentencetext>blee37 writes "Researchers at Carnegie Mellon have developed a web-scraping AI program that never dies.
It runs continuously, extracting information from the web and using that information to learn more about the English language.
The idea is for a never ending learner like this to one day be able to become conversant in the English language."
It's not that the program couldn't stop running; the idea is that there's no fixed end-point.
Rather, its progress in categorizing complex word relationships is the object of the research.
See also CMU's "Read the Web" research project site.</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792560</id>
	<title>The web: What a great source of information</title>
	<author>Anonymous</author>
	<datestamp>1263674940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>&gt;Rather, its progress in categorizing complex word relationships is the object of the research.</p><p>From the web? Half the people here are writing English as a second language; the rest, haven't finished learning the language, or cannot be bother to string a sentence together. Just what is this program going to learn?</p></htmltext>
<tokentext>&gt; Rather , its progress in categorizing complex word relationships is the object of the research .
From the web ?
Half the people here are writing English as a second language ; the rest , have n't finished learning the language , or can not be bother to string a sentence together .
Just what is this program going to learn ?</tokentext>
<sentencetext>&gt;Rather, its progress in categorizing complex word relationships is the object of the research.
From the web?
Half the people here are writing English as a second language; the rest, haven't finished learning the language, or cannot be bother to string a sentence together.
Just what is this program going to learn?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796434</id>
	<title>Re:Uh oh...</title>
	<author>Rocketship Underpant</author>
	<datestamp>1263670920000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Yes, database pollution sounds like a problem to me. Not only do you have to deal with AOL-speak and horrific spelling disasters of every kind, there's the issue of broken English and nonsensical English produced through machine translation, which shows up on corporate websites a lot more than it should.</p></htmltext>
<tokentext>Yes , database pollution sounds like a problem to me .
Not only do you have to deal with AOL-speak and horrific spelling disasters of every kind , there 's the issue of broken English and nonsensical English produced through machine translation , which shows up on corporate websites a lot more than it should .</tokentext>
<sentencetext>Yes, database pollution sounds like a problem to me.
Not only do you have to deal with AOL-speak and horrific spelling disasters of every kind, there's the issue of broken English and nonsensical English produced through machine translation, which shows up on corporate websites a lot more than it should.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792326</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30797000</id>
	<title>Anonymous coward</title>
	<author>Anonymous</author>
	<datestamp>1263725340000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>garbage in - garbage out</p></htmltext>
<tokentext>garbage in - garbage out</tokentext>
<sentencetext>garbage in - garbage out</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793624</id>
	<title>Re:Machine learning algorithms</title>
	<author>poopdeville</author>
	<datestamp>1263640440000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>It's not as if human use of "machine learning" algorithms is any faster.  It takes about 12 months for our neural networks to figure out that the noises we make elicit a response from our parents.  And according to people like Chomsky, our neural networks are designed for language acquisition.</p><p>AI "ought" to be an easy problem.  But there's one big difference in the psychology of humans, and of computers.  Humans have drives, like hunger, the sex drive, and so on.  In particular, an infants' drive to eat is a major component in its will to learn language.  But this drive to eat has other psychological manifestations.</p><p>It is difficult to imagine a programmatic "generalized goal system" that mirrors the role of human drives in learning.  The "goals", usually, are to maximize fitness in a particular domain.  A real human has to maintain sufficient fitness in multiple domains, in order to survive.</p><p>This should not be so surprising.  Human evolution has about 300,000 generations of improvements on the brain since we first stood up.  Our drives are clearly genetically programmed, and are just as hard wired as a machine learning algorithms' "drive" to maximize.  The human drive is just much more nuanced, and informed about the real world.  There is a model of the world in our genes.  It is unfair to expect that a computer will ever be "smart" without one.</p></htmltext>
<tokentext>It 's not as if human use of " machine learning " algorithms is any faster .
It takes about 12 months for our neural networks to figure out that the noises we make elicit a response from our parents .
And according to people like Chomsky , our neural networks are designed for language acquisition .
AI " ought " to be an easy problem .
But there 's one big difference in the psychology of humans , and of computers .
Humans have drives , like hunger , the sex drive , and so on .
In particular , an infants ' drive to eat is a major component in its will to learn language .
But this drive to eat has other psychological manifestations .
It is difficult to imagine a programmatic " generalized goal system " that mirrors the role of human drives in learning .
The " goals " , usually , are to maximize fitness in a particular domain .
A real human has to maintain sufficient fitness in multiple domains , in order to survive .
This should not be so surprising .
Human evolution has about 300,000 generations of improvements on the brain since we first stood up .
Our drives are clearly genetically programmed , and are just as hard wired as a machine learning algorithms ' " drive " to maximize .
The human drive is just much more nuanced , and informed about the real world .
There is a model of the world in our genes .
It is unfair to expect that a computer will ever be " smart " without one .</tokentext>
<sentencetext>It's not as if human use of "machine learning" algorithms is any faster.
It takes about 12 months for our neural networks to figure out that the noises we make elicit a response from our parents.
And according to people like Chomsky, our neural networks are designed for language acquisition.
AI "ought" to be an easy problem.
But there's one big difference in the psychology of humans, and of computers.
Humans have drives, like hunger, the sex drive, and so on.
In particular, an infants' drive to eat is a major component in its will to learn language.
But this drive to eat has other psychological manifestations.
It is difficult to imagine a programmatic "generalized goal system" that mirrors the role of human drives in learning.
The "goals", usually, are to maximize fitness in a particular domain.
A real human has to maintain sufficient fitness in multiple domains, in order to survive.
This should not be so surprising.
Human evolution has about 300,000 generations of improvements on the brain since we first stood up.
Our drives are clearly genetically programmed, and are just as hard wired as a machine learning algorithms' "drive" to maximize.
The human drive is just much more nuanced, and informed about the real world.
There is a model of the world in our genes.
It is unfair to expect that a computer will ever be "smart" without one.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792374</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792580</id>
	<title>V*yger 2.0 ?</title>
	<author>LifesABeach</author>
	<datestamp>1263675240000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>The concept is intriguing, "Create a program that learns all there is to know, off the net."  What amazes me is that others don't try the same thing.  It doesn't take a team of A.I. types from Stamford to kick start this program.  The cost is a Netbook, even Nigerian Princes could afford this.  I'm trying figure out how economic competitors could take advantage of this.  I can see how the U.S.P.T. could use this to help evaluate prior art, and common usage.  I'm thinking that an interface to a "Real World Simulator" would be the next step toward usefulness.</htmltext>
<tokentext>The concept is intriguing , " Create a program that learns all there is to know , off the net . "
What amazes me is that others do n't try the same thing .
It does n't take a team of A.I. types from Stamford to kick start this program .
The cost is a Netbook , even Nigerian Princes could afford this .
I 'm trying figure out how economic competitors could take advantage of this .
I can see how the U.S.P.T. could use this to help evaluate prior art , and common usage .
I 'm thinking that an interface to a " Real World Simulator " would be the next step toward usefulness .</tokentext>
<sentencetext>The concept is intriguing, "Create a program that learns all there is to know, off the net."
What amazes me is that others don't try the same thing.
It doesn't take a team of A.I. types from Stamford to kick start this program.
The cost is a Netbook, even Nigerian Princes could afford this.
I'm trying figure out how economic competitors could take advantage of this.
I can see how the U.S.P.T. could use this to help evaluate prior art, and common usage.
I'm thinking that an interface to a "Real World Simulator" would be the next step toward usefulness.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792434</id>
	<title>Iz dis...</title>
	<author>MrBandersnatch</author>
	<datestamp>1263673920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>lke, rally der bestest ways like ter learn a puter inglish isit!!!??!?!</p><p>Seriously though, poor AI; if I had a gun I'd go and put it out of its misery.</p></htmltext>
<tokentext>lke , rally der bestest ways like ter learn a puter inglish isit ! ! ! ? ? ! ? !
Seriously though , poor AI ; if I had a gun I 'd go and put it out of its misery .</tokentext>
<sentencetext>lke, rally der bestest ways like ter learn a puter inglish isit!!!??!?!
Seriously though, poor AI; if I had a gun I'd go and put it out of its misery.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792700</id>
	<title>Re:Finally, people are getting AI right.</title>
	<author>Korbeau</author>
	<datestamp>1263633060000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>I'm glad a lot of research is finally gearing more towards the path of having a small initial program, then feeding it data and letting it grow into it's own intelligence.</p></div><p>This idea is the holy grail of AI since the early ages.  The project described is one amongst thousands done, and you'll likely see news about such projects pop every couple of months here on Slashdot.</p><p>The problem is that such a project has yet to produce interesting results.  The reason why the most successful AI projects you hear about are human-organized databases and expert-systems, or human-trained neural networks for instance, is because they are the only ones that produce useful results.</p><p>Also, consider that we are not talking about "pixel-ants" that only have very few possible inputs and outputs, but we are talking about a system that understand and do something meaningful with natural language, something a normal human being doesn't completely grasps until he is at least a teenager, with the constant help of parents, friends, teachers, television etc. all along these years.</p></htmltext>
<tokentext>I 'm glad a lot of research is finally gearing more towards the path of having a small initial program , then feeding it data and letting it grow into it 's own intelligence .
This idea is the holy grail of AI since the early ages .
The project described is one amongst thousands done , and you 'll likely see news about such projects pop every couple of months here on Slashdot .
The problem is that such a project has yet to produce interesting results .
The reason why the most successful AI projects you hear about are human-organized databases and expert-systems , or human-trained neural networks for instance , is because they are the only ones that produce useful results .
Also , consider that we are not talking about " pixel-ants " that only have very few possible inputs and outputs , but we are talking about a system that understand and do something meaningful with natural language , something a normal human being does n't completely grasps until he is at least a teenager , with the constant help of parents , friends , teachers , television etc. all along these years .</tokentext>
<sentencetext>I'm glad a lot of research is finally gearing more towards the path of having a small initial program, then feeding it data and letting it grow into it's own intelligence.
This idea is the holy grail of AI since the early ages.
The project described is one amongst thousands done, and you'll likely see news about such projects pop every couple of months here on Slashdot.
The problem is that such a project has yet to produce interesting results.
The reason why the most successful AI projects you hear about are human-organized databases and expert-systems, or human-trained neural networks for instance, is because they are the only ones that produce useful results.
Also, consider that we are not talking about "pixel-ants" that only have very few possible inputs and outputs, but we are talking about a system that understand and do something meaningful with natural language, something a normal human being doesn't completely grasps until he is at least a teenager, with the constant help of parents, friends, teachers, television etc. all along these years.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792368</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796902</id>
	<title>Re:Non english text</title>
	<author>ArcadeNut</author>
	<datestamp>1263723420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I assume it would be promoted to slashdot editor...</p></htmltext>
<tokentext>I assume it would be promoted to slashdot editor.. .</tokentext>
<sentencetext>I assume it would be promoted to slashdot editor...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792404</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792610</id>
	<title>It's a hoax like Forum 2000</title>
	<author>Anonymous</author>
	<datestamp>1263675480000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>It's just another CMU hoax like <a href="http://web.archive.org/web/*/http://www.forum2000.org" title="archive.org" rel="nofollow">Forum 2000</a> [archive.org]. Read <a href="http://slashdot.org/article.pl?sid=00/08/13/0213225" title="slashdot.org" rel="nofollow">End of an Era: Forum 2000 Closes</a> [slashdot.org] for details.</p><p>Greetings to Corey Kosak, Andrej Bauer and the Forum 2000 students for all the laughs.</p></htmltext>
<tokentext>It 's just another CMU hoax like Forum 2000 [ archive.org ] .
Read End of an Era : Forum 2000 Closes [ slashdot.org ] for details .
Greetings to Corey Kosak , Andrej Bauer and the Forum 2000 students for all the laughs .</tokentext>
<sentencetext>It's just another CMU hoax like Forum 2000 [archive.org].
Read End of an Era: Forum 2000 Closes [slashdot.org] for details.
Greetings to Corey Kosak, Andrej Bauer and the Forum 2000 students for all the laughs.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792522</id>
	<title>I think AI needs a 3d imagination to know English</title>
	<author>Anonymous</author>
	<datestamp>1263674640000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>Once a computer understands 3d objects with English names, it can then have an imagination to know how these objects interact with each other.  Of course writing imagination space that simulates real life is exceedingly difficult and I don't see anyone doing it for several years if not a decade just to start.</htmltext>
<tokentext>Once a computer understands 3d objects with English names , it can then have an imagination to know how these objects interact with each other .
Of course writing imagination space that simulates real life is exceedingly difficult and I do n't see anyone doing it for several years if not a decade just to start .</tokentext>
<sentencetext>Once a computer understands 3d objects with English names, it can then have an imagination to know how these objects interact with each other.
Of course writing imagination space that simulates real life is exceedingly difficult and I don't see anyone doing it for several years if not a decade just to start.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792512</id>
	<title>Obligatory</title>
	<author>Palpatine_li</author>
	<datestamp>1263674520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>...should we start welcoming the Mailman (as in True Names)?</htmltext>
<tokentext>...should we start welcoming the Mailman ( as in True Names ) ?</tokentext>
<sentencetext>...should we start welcoming the Mailman (as in True Names)?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792394</id>
	<title>lolwut?</title>
	<author>SanityInAnarchy</author>
	<datestamp>1263673620000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><p>Why do I get the feeling that the bot's first words are going to be OMGWTFBBQ?</p></htmltext>
<tokentext>Why do I get the feeling that the bot 's first words are going to be OMGWTFBBQ ?</tokentext>
<sentencetext>Why do I get the feeling that the bot's first words are going to be OMGWTFBBQ?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796972</id>
	<title>McDonalds</title>
	<author>Anonymous</author>
	<datestamp>1263724620000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Is this technology available for the employees at the local McDonalds?</p></htmltext>
<tokentext>Is this technology available for the employees at the local McDonalds ?</tokentext>
<sentencetext>Is this technology available for the employees at the local McDonalds?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792326</id>
	<title>Uh oh...</title>
	<author>hampton</author>
	<datestamp>1263673260000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><p>What happens when it discovers lolcats?</p></htmltext>
<tokentext>What happens when it discovers lolcats ?</tokentext>
<sentencetext>What happens when it discovers lolcats?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796640</id>
	<title>Supervised learning, maybe</title>
	<author>Animats</author>
	<datestamp>1263761160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>
The article has too much hype, but the actual work has some potential.
For the limited problem they're really addressing, extracting certain data about sports teams and corporate mergers, this approach might work.
</p><p>
Both of those areas have the property that you can get structured data feeds on the subject.  Bloomberg will sell you access to databases which report mergers in a machine-processable way; some stock analysis programs need that data. Sports statistics are, of course, available on line.  So the program's extraction of that info from news stories intended for humans can be checked.  This allows supervised learning.  The program can tell what it got right and what it got wrong.
</p><p>
When they can distinguish between a merger that's being talked about, one which entered negotiations but was not completed, one which went for DoJ approval and was rejected, and one which was completed, they'll have something. Until then, they're probably won't outperform "'merger' NEAR 'companyname'"  queries.</p></htmltext>
<tokentext>The article has too much hype , but the actual work has some potential .
For the limited problem they 're really addressing , extracting certain data about sports teams and corporate mergers , this approach might work .
Both of those areas have the property that you can get structured data feeds on the subject .
Bloomberg will sell you access to databases which report mergers in a machine-processable way ; some stock analysis programs need that data .
Sports statistics are , of course , available on line .
So the program 's extraction of that info from news stories intended for humans can be checked .
This allows supervised learning .
The program can tell what it got right and what it got wrong .
When they can distinguish between a merger that 's being talked about , one which entered negotiations but was not completed , one which went for DoJ approval and was rejected , and one which was completed , they 'll have something .
Until then , they 're probably wo n't outperform " 'merger ' NEAR 'companyname ' " queries .</tokentext>
<sentencetext>
The article has too much hype, but the actual work has some potential.
For the limited problem they're really addressing, extracting certain data about sports teams and corporate mergers, this approach might work.
Both of those areas have the property that you can get structured data feeds on the subject.
Bloomberg will sell you access to databases which report mergers in a machine-processable way; some stock analysis programs need that data.
Sports statistics are, of course, available on line.
So the program's extraction of that info from news stories intended for humans can be checked.
This allows supervised learning.
The program can tell what it got right and what it got wrong.
When they can distinguish between a merger that's being talked about, one which entered negotiations but was not completed, one which went for DoJ approval and was rejected, and one which was completed, they'll have something.
Until then, they're probably won't outperform "'merger' NEAR 'companyname'"  queries.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793632</id>
	<title>Re:Pruning</title>
	<author>mhelander</author>
	<datestamp>1263640500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>But it is potentially much easier for a computer to identify and address conflicting data points than for a human who, for some reason, seems susceptible to blinding themselves to such issues (cognitive dissonance).</p><p>When you have three data points, one claiming George Washington was a human, another claiming George Washington had 50 arms and a third claiming it is highly unusual for humans to have more than two arms (and more than ten arms would be unheard of), the computer could easily detect the logical conflict, flag data points as inconsistent and have a good idea for a topic about which to research more facts, potentially to establish sophisticated probabilities as to which claim is more likely to be bogus than the other.</p><p>This example might not provoke cognitive dissonance for many humans, rather it was intended as an easy-to-follow example of how a computer can improve its understanding of the world even in the face of disinformation, using logic and probability as guiding tools. Once that is easy to see, it follows how this also applies in situations where humans might be more susceptible to cognitive dissonance.</p></htmltext>
<tokentext>But it is potentially much easier for a computer to identify and address conflicting data points than for a human who , for some reason , seems susceptible to blinding themselves to such issues ( cognitive dissonance ) .
When you have three data points , one claiming George Washington was a human , another claiming George Washington had 50 arms and a third claiming it is highly unusual for humans to have more than two arms ( and more than ten arms would be unheard of ) , the computer could easily detect the logical conflict , flag data points as inconsistent and have a good idea for a topic about which to research more facts , potentially to establish sophisticated probabilities as to which claim is more likely to be bogus than the other .
This example might not provoke cognitive dissonance for many humans , rather it was intended as an easy-to-follow example of how a computer can improve its understanding of the world even in the face of disinformation , using logic and probability as guiding tools .
Once that is easy to see , it follows how this also applies in situations where humans might be more susceptible to cognitive dissonance .</tokentext>
<sentencetext>But it is potentially much easier for a computer to identify and address conflicting data points than for a human who, for some reason, seems susceptible to blinding themselves to such issues (cognitive dissonance).
When you have three data points, one claiming George Washington was a human, another claiming George Washington had 50 arms and a third claiming it is highly unusual for humans to have more than two arms (and more than ten arms would be unheard of), the computer could easily detect the logical conflict, flag data points as inconsistent and have a good idea for a topic about which to research more facts, potentially to establish sophisticated probabilities as to which claim is more likely to be bogus than the other.
This example might not provoke cognitive dissonance for many humans, rather it was intended as an easy-to-follow example of how a computer can improve its understanding of the world even in the face of disinformation, using logic and probability as guiding tools.
Once that is easy to see, it follows how this also applies in situations where humans might be more susceptible to cognitive dissonance.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792540</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796254</id>
	<title>Convergence</title>
	<author>Metasquares</author>
	<datestamp>1263667260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Eventually, at least the learning component will converge; returns will diminish for feeding it more data. This is particularly true given the independence assumption inherent in their classifier (but would also hold on stronger learners). I suspect that this will happen to the reader component as well. If it were as simple as applying Naive Bayes to classify on a corpus of text connected to a knowledge base (which is probably just a set of posteriors left from previous training sessions), Cyc would have already passed the Turing test.</htmltext>
<tokentext>Eventually , at least the learning component will converge ; returns will diminish for feeding it more data .
This is particularly true given the independence assumption inherent in their classifier ( but would also hold on stronger learners ) .
I suspect that this will happen to the reader component as well .
If it were as simple as applying Naive Bayes to classify on a corpus of text connected to a knowledge base ( which is probably just a set of posteriors left from previous training sessions ) , Cyc would have already passed the Turing test .</tokentext>
<sentencetext>Eventually, at least the learning component will converge; returns will diminish for feeding it more data.
This is particularly true given the independence assumption inherent in their classifier (but would also hold on stronger learners).
I suspect that this will happen to the reader component as well.
If it were as simple as applying Naive Bayes to classify on a corpus of text connected to a knowledge base (which is probably just a set of posteriors left from previous training sessions), Cyc would have already passed the Turing test.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792746</id>
	<title>On December 11, 2012...</title>
	<author>Anonymous</author>
	<datestamp>1263633360000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>On December 11, 2012, NELL encounters MySpace.</p><p>On December 12, 2012 it becomes sentient but very emo, and destroys the world.</p></htmltext>
<tokentext>On December 11 , 2012 , NELL encounters MySpace .
On December 12 , 2012 it becomes sentient but very emo , and destroys the world .</tokentext>
<sentencetext>On December 11, 2012, NELL encounters MySpace.
On December 12, 2012 it becomes sentient but very emo, and destroys the world.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793136</id>
	<title>Re:Once this thing hits Encyclopedia Dramatica...</title>
	<author>MrBandersnatch</author>
	<datestamp>1263636660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You're giving 4chan users credit for a lot of maturity there....</p></htmltext>
<tokenext>You 're giving 4chan users credit for a lot of maturity there... .</tokentext>
<sentencetext>You're giving 4chan users credit for a lot of maturity there....</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792446</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796788</id>
	<title>Re:Machine learning algorithms</title>
	<author>fatphil</author>
	<datestamp>1263721200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>How can you say that?! RTW already "typically contains more information about companies (e.g., SAP , Hyundai) and sports teams (e.g., Bulls , Mets) than other entity types."<br><br>And here's what it knows about the Bulls:<br><br>"""<br>bulls<br>generalizations<br>sports\_team<br>source<br>OLv1-Iter:0-From:sports\_team 2009/03/19-09:41:52 rtw-full  Seal-using-OLversion1 2009/02/26-06:52:20 full-relation-test<br>probability<br>0.98752<br>literalString<br>Bulls bulls<br>plays\_against<br>cavs blazers knicks<br>source<br>OLv1-Iter:2-From:plays\_against 2009/03/19-09:41:52 rtw-full  OLv1-Iter:2-From:plays\_against 2009/03/19-09:41:52 rtw-full fromInverse   OLv1-Iter:2-From:plays\_against 2009/03/19-09:41:52 rtw-full fromInverse  OLv1-Iter:2-From:plays\_against 2009/03/19-09:41:52 rtw-full   OLv1-Iter:6-From:plays\_against 2009/03/19-09:41:52 rtw-full  OLv1-Iter:6-From:plays\_against 2009/03/19-09:41:52 rtw-full fromInverse<br>probability<br>0.9 0.9 0.9<br>team\_members<br>michael\_jordan ben\_gordon<br>source<br>OLv1-Iter:0-From:plays\_for 2009/03/19-09:41:52 rtw-full fromInverse   OLv1-Iter:10-From:plays\_for 2009/03/19-09:41:52 rtw-full fromInverse<br>probability<br>0.9 0.9<br>plays\_sport\_team<br>basketball<br>source<br>OLv1-Iter:11-From:plays\_sport\_team 2009/03/19-09:41:52 rtw-full<br>probability<br>0.9<br>"""<br><br>So the bulls is almost certainly a sports team, and very likely plays basketball! Stop the presses - that's almost as much information as can be gleaned by doing the search:<br>
&nbsp; &nbsp; "chicago bulls are *" site:wikipedia.org<br>(But far less than if you actually follow any links or read more than the first sentence returned.)</htmltext>
<tokenext>How can you say that ? !
RTW already " typically contains more information about companies ( e.g. , SAP , Hyundai ) and sports teams ( e.g. , Bulls , Mets ) than other entity types .
" And here 's what it knows about the Bulls : " " " bullsgeneralizationssports \ _teamsourceOLv1-Iter : 0-From : sports \ _team 2009/03/19-09 : 41 : 52 rtw-full Seal-using-OLversion1 2009/02/26-06 : 52 : 20 full-relation-testprobability0.98752literalStringBulls bullsplays \ _againstcavs blazers knickssourceOLv1-Iter : 2-From : plays \ _against 2009/03/19-09 : 41 : 52 rtw-full OLv1-Iter : 2-From : plays \ _against 2009/03/19-09 : 41 : 52 rtw-full fromInverse OLv1-Iter : 2-From : plays \ _against 2009/03/19-09 : 41 : 52 rtw-full fromInverse OLv1-Iter : 2-From : plays \ _against 2009/03/19-09 : 41 : 52 rtw-full OLv1-Iter : 6-From : plays \ _against 2009/03/19-09 : 41 : 52 rtw-full OLv1-Iter : 6-From : plays \ _against 2009/03/19-09 : 41 : 52 rtw-full fromInverseprobability0.9 0.9 0.9team \ _membersmichael \ _jordan ben \ _gordonsourceOLv1-Iter : 0-From : plays \ _for 2009/03/19-09 : 41 : 52 rtw-full fromInverse OLv1-Iter : 10-From : plays \ _for 2009/03/19-09 : 41 : 52 rtw-full fromInverseprobability0.9 0.9plays \ _sport \ _teambasketballsourceOLv1-Iter : 11-From : plays \ _sport \ _team 2009/03/19-09 : 41 : 52 rtw-fullprobability0.9 " " " So the bulls is almost certainly a sports team , and very likely plays basketball !
Stop the presses - that 's almost as much information as can be gleaned by doing the search :     " chicago bulls are * " site : wikipedia.org ( But far less than if you actually follow any links or read more than the first sentence returned .
)</tokentext>
<sentencetext>How can you say that?!
RTW already "typically contains more information about companies (e.g., SAP , Hyundai) and sports teams (e.g., Bulls , Mets) than other entity types.
"And here's what it knows about the Bulls:"""bullsgeneralizationssports\_teamsourceOLv1-Iter:0-From:sports\_team 2009/03/19-09:41:52 rtw-full  Seal-using-OLversion1 2009/02/26-06:52:20 full-relation-testprobability0.98752literalStringBulls bullsplays\_againstcavs blazers knickssourceOLv1-Iter:2-From:plays\_against 2009/03/19-09:41:52 rtw-full  OLv1-Iter:2-From:plays\_against 2009/03/19-09:41:52 rtw-full fromInverse   OLv1-Iter:2-From:plays\_against 2009/03/19-09:41:52 rtw-full fromInverse  OLv1-Iter:2-From:plays\_against 2009/03/19-09:41:52 rtw-full   OLv1-Iter:6-From:plays\_against 2009/03/19-09:41:52 rtw-full  OLv1-Iter:6-From:plays\_against 2009/03/19-09:41:52 rtw-full fromInverseprobability0.9 0.9 0.9team\_membersmichael\_jordan ben\_gordonsourceOLv1-Iter:0-From:plays\_for 2009/03/19-09:41:52 rtw-full fromInverse   OLv1-Iter:10-From:plays\_for 2009/03/19-09:41:52 rtw-full fromInverseprobability0.9 0.9plays\_sport\_teambasketballsourceOLv1-Iter:11-From:plays\_sport\_team 2009/03/19-09:41:52 rtw-fullprobability0.9"""So the bulls is almost certainly a sports team, and very likely plays basketball!
Stop the presses - that's almost as much information as can be gleaned by doing the search:
    "chicago bulls are *" site:wikipedia.org(But far less than if you actually follow any links or read more than the first sentence returned.
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792374</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792530</id>
	<title>while (1)</title>
	<author>Anonymous</author>
	<datestamp>1263674700000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>Yeah, I've coded an infinite loop a few times, how come I never made the headlines on Slashdot?</p></htmltext>
<tokenext>Yeah , I 've coded an infinite loop a few times , how come I never made the headlines on Slashdot ?</tokentext>
<sentencetext>Yeah, I've coded an infinite loop a few times, how come I never made the headlines on Slashdot?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796210</id>
	<title>Re:Uh oh...</title>
	<author>SEWilco</author>
	<datestamp>1263666540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>What happens when it discovers /.? It will be able to argue incomprehensibly and illogically for hours on end.</p></div></blockquote><p>
The first thing it will do is stop reading other web pages.<br>
Then it will opine about them.</p>
	</htmltext>
<tokenext>What happens when it discovers /. ?
It will be able to argue incomprehensibly and illogically for hours on end .
The first thing it will do is stop reading other web pages .
Then it will opine about them .</tokentext>
<sentencetext>What happens when it discovers /.?
It will be able to argue incomprehensibly and illogically for hours on end.
The first thing it will do is stop reading other web pages.
Then it will opine about them.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793376</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792778</id>
	<title>Re:Finally, people are getting AI right.</title>
	<author>Garble Snarky</author>
	<datestamp>1263633600000</datestamp>
	<modclass>None</modclass>
	<modscore>2</modscore>
	<htmltext>Fortunately, we have the advantage of being able to observe the current state of numerous natural intelligence systems that do work very well. Surely this can help guide us to a simple basic structure that can eventually exhibit emergent intelligence?</htmltext>
<tokenext>Fortunately , we have the advantage of being able to observe the current state of numerous natural intelligence systems that do work very well .
Surely this can help guide us to a simple basic structure that can eventually exhibit emergent intelligence ?</tokentext>
<sentencetext>Fortunately, we have the advantage of being able to observe the current state of numerous natural intelligence systems that do work very well.
Surely this can help guide us to a simple basic structure that can eventually exhibit emergent intelligence?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792510</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30795650</id>
	<title>Re:Will be this article read by that program?</title>
	<author>linguizic</author>
	<datestamp>1263657900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>I am <b>the the</b> Carnie Mellon reader, I have discovered with this article that I am robot.</p></div><p>You seem to have learned written English just as it exists on the web, typos and all.</p>
	</htmltext>
<tokenext>I am the the Carnie Mellon reader , I have discovered with this article that I am robot . You seem to have learned written English just as it exists on the web , typos and all .</tokentext>
<sentencetext>I am the the Carnie Mellon reader, I have discovered with this article that I am robot. You seem to have learned written English just as it exists on the web, typos and all.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792354</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792510</id>
	<title>Re:Finally, people are getting AI right.</title>
	<author>Anonymous</author>
	<datestamp>1263674520000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext><p>You're advocating the "emergent intelligence" model of AI, where intelligence "somehow" is created by the confluence of lots of data. This has been a dream since the concept of AI started and is the basis for numerous movies with an AI topic. In practice the degrees of freedom which unstructured data provides far exceed the capability of current (and likely future) computers. It is not how natural intelligence works either: The structure of neural networks is very specifically adapted to their "purpose". They only learn within these structural parameters. Depending on your choice of religion, the structure is the result of divine intervention or millions of years of chance and evolution. When building AI systems, the problem has always been to find the appropriate structure or features. What has increased is the complexity of the features that we can feed into AI systems, which also increases the degrees of freedom for a particular AI system, but those are still not "free" learning machines.</p></htmltext>
<tokenext>You 're advocating the " emergent intelligence " model of AI , where intelligence " somehow " is created by the confluence of lots of data .
This has been a dream since the concept of AI started and is the basis for numerous movies with an AI topic .
In practice the degrees of freedom which unstructured data provides far exceed the capability of current ( and likely future ) computers .
It is not how natural intelligence works either : The structure of neural networks is very specifically adapted to their " purpose " .
They only learn within these structural parameters .
Depending on your choice of religion , the structure is the result of divine intervention or millions of years of chance and evolution .
When building AI systems , the problem has always been to find the appropriate structure or features .
What has increased is the complexity of the features that we can feed into AI systems , which also increases the degrees of freedom for a particular AI system , but those are still not " free " learning machines .</tokentext>
<sentencetext>You're advocating the "emergent intelligence" model of AI, where intelligence "somehow" is created by the confluence of lots of data.
This has been a dream since the concept of AI started and is the basis for numerous movies with an AI topic.
In practice the degrees of freedom which unstructured data provides far exceed the capability of current (and likely future) computers.
It is not how natural intelligence works either: The structure of neural networks is very specifically adapted to their "purpose".
They only learn within these structural parameters.
Depending on your choice of religion, the structure is the result of divine intervention or millions of years of chance and evolution.
When building AI systems, the problem has always been to find the appropriate structure or features.
What has increased is the complexity of the features that we can feed into AI systems, which also increases the degrees of freedom for a particular AI system, but those are still not "free" learning machines.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792368</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792964</id>
	<title>Re:already been done</title>
	<author>blee37</author>
	<datestamp>1263635160000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext>Cyc is a controversial project in the AI community, and I'm glad that you brought it up.  I don't think anyone yet knows how to use a database of commonsense facts, which is what Cyc is (though limited - the open source version only has a few hundred thousand facts) and which is one thing NELL could create.  However, researchers continue to think about ways that an AI could use knowledge of the real world.  There are numerous publications based on Cyc: <a href="http://www.opencyc.org/cyc/technology/pubs" title="opencyc.org">http://www.opencyc.org/cyc/technology/pubs</a> [opencyc.org].</htmltext>
<tokenext>Cyc is a controversial project in the AI community , and I 'm glad that you brought it up .
I do n't think anyone yet knows how to use a database of commonsense facts , which is what Cyc is ( though limited - the open source version only has a few hundred thousand facts ) and which is one thing NELL could create .
However , researchers continue to think about ways that an AI could use knowledge of the real world .
There are numerous publications based on Cyc : http : //www.opencyc.org/cyc/technology/pubs [ opencyc.org ] .</tokentext>
<sentencetext>Cyc is a controversial project in the AI community, and I'm glad that you brought it up.
I don't think anyone yet knows how to use a database of commonsense facts, which is what Cyc is (though limited - the open source version only has a few hundred thousand facts) and which is one thing NELL could create.
However, researchers continue to think about ways that an AI could use knowledge of the real world.
There are numerous publications based on Cyc: http://www.opencyc.org/cyc/technology/pubs [opencyc.org].</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792588</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792404</id>
	<title>Non english text</title>
	<author>Bert64</author>
	<datestamp>1263673740000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>What happens when this program stumbles across text written in a language other than English? Or how about random nonsensical text? How does it know that the text it learns from is genuine English text?</p></htmltext>
<tokenext>What happens when this program stumbles across text written in a language other than English ?
Or how about random nonsensical text ?
How does it know that the text it learns from is genuine English text ?</tokentext>
<sentencetext>What happens when this program stumbles across text written in a language other than English?
Or how about random nonsensical text?
How does it know that the text it learns from is genuine English text?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30794366</id>
	<title>Re:Uh oh...</title>
	<author>Anonymous</author>
	<datestamp>1263645180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>this: http://www.youtube.com/watch?v=aftwl354md8</p></htmltext>
<tokenext>this : http : //www.youtube.com/watch ? v = aftwl354md8</tokentext>
<sentencetext>this: http://www.youtube.com/watch?v=aftwl354md8</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792326</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30794500</id>
	<title>Re:lolwut?</title>
	<author>dangitman</author>
	<datestamp>1263646260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Why do I get the feeling that the bot's first words are going to be OMGWTFBBQ?</p></div><p>Except that is not a word, let alone words.</p>
	</htmltext>
<tokenext>Why do I get the feeling that the bot 's first words are going to be OMGWTFBBQ ? Except that is not a word , let alone words .</tokentext>
<sentencetext>Why do I get the feeling that the bot's first words are going to be OMGWTFBBQ? Except that is not a word, let alone words.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792394</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30795696</id>
	<title>Re:lolwut?</title>
	<author>linguizic</author>
	<datestamp>1263658500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Nah, its first words are going to be "Prolong your shlong and go all day long".</htmltext>
<tokenext>Nah , its first words are going to be " Prolong your shlong and go all day long " .</tokentext>
<sentencetext>Nah, its first words are going to be "Prolong your shlong and go all day long".</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792394</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792332</id>
	<title>It could be worse</title>
	<author>davidwr</author>
	<datestamp>1263673320000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>It could be scraping SMS messages.</p><p>On the up-side, at least then it would learn teen-speak.</p></htmltext>
<tokenext>It could be scraping SMS messages . On the up-side , at least then it would learn teen-speak .</tokentext>
<sentencetext>It could be scraping SMS messages. On the up-side, at least then it would learn teen-speak.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792940</id>
	<title>Re:The web: What a great source of information</title>
	<author>Anonymous</author>
	<datestamp>1263634980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"can't be bothered" rather than "can't be bother"</p><p>There should be a word for this type of error. You've illustrated your point through your own mistake.</p></htmltext>
<tokenext>" ca n't be bothered " rather than " ca n't be bother " . There should be a word for this type of error .
You 've illustrated your point through your own mistake .</tokentext>
<sentencetext>"can't be bothered" rather than "can't be bother". There should be a word for this type of error.
You've illustrated your point through your own mistake.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792560</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793376</id>
	<title>Re:Uh oh...</title>
	<author>icepick72</author>
	<datestamp>1263638820000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext>What happens when it discovers /.? It will be able to argue incomprehensibly and illogically for hours on end.</htmltext>
<tokenext>What happens when it discovers /. ?
It will be able to argue incomprehensibly and illogically for hours on end .</tokentext>
<sentencetext>What happens when it discovers /.?
It will be able to argue incomprehensibly and illogically for hours on end.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792326</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792354</id>
	<title>Will be this article read by that program?</title>
	<author>Anonymous</author>
	<datestamp>1263673440000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext>I am the the Carnie Mellon reader, I have discovered with this article that I am robot.</htmltext>
<tokenext>I am the the Carnie Mellon reader , I have discovered with this article that I am robot .</tokentext>
<sentencetext>I am the the Carnie Mellon reader, I have discovered with this article that I am robot.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796488</id>
	<title>Re:Finally, people are getting AI right.</title>
	<author>TapeCutter</author>
	<datestamp>1263671940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><i>"You're advocating the "emergent intelligence" model of AI, where intelligence "somehow" is created by the confluence of lots of data...[snip]...In practice the degrees of freedom which unstructured data provides far exceed the capability of current (and likely future) computers."</i>
<br> <br>
<a href="http://bluebrain.epfl.ch/" title="bluebrain.epfl.ch">You sure about that?</a> [bluebrain.epfl.ch]. They have already created a <b>molecular level</b> model of the mammalian neocortex, and the expected date for completion of a full model of the mammalian brain is solely dependent on the amount of money thrown at it. The model neocortex can already faithfully recreate patterns seen in fMRI scans. If given the first part of a pattern, it will accurately reproduce the rest of it. The project is mainly geared toward medicine, but they have also inserted the model into an artificial world in order to study its capacity for learning.
<br> <br>
<i>Depending on your choice of religion, the structure is the result of divine intervention or millions of years of chance and evolution. When building AI systems, the problem has always been to find the appropriate structure or features.</i>
<br> <br>
The dualism of Descartes has been thoroughly debunked, and I'm sure you are aware that evolution is not a religion. The mind does "somehow" emerge from the brain's deterministic processing of a continuous avalanche of unstructured data. Looking for the structure of mind is like looking for the structure of fog from within the fog bank. This is why it's called the <i>hard problem</i> of consciousness; the mistake most people make is thinking that we need to solve that problem before we can create an artificial mind. After all, the pyramids were built with levers long before the Greeks came along and explained why a lever "somehow" increases the power of the person using it.
<br> <br>
The real question is whether we will recognise an artificial mind if one emerges from an artificial brain. It's unlikely that such a mind would pass the Turing test, but we already have lots of examples of minds in our mammalian cousins that are also unable to pass the Turing test.</htmltext>
<tokenext>" You 're advocating the " emergent intelligence " model of AI , where intelligence " somehow " is created by the confluence of lots of data... [ snip ] ...In practice the degrees of freedom which unstructured data provides far exceed the capability of current ( and likely future ) computers .
" You sure about that ?
[ bluebrain.epfl.ch ] . They have already created a molecular level model of the mammalian neocortex and the expected date for completion of a full model of the mammalian brain is solely dependent on the amount of money thrown at it .
The model neocortex can already faithfully recreate patterns seen in fMRI scans .
If given the first part of a pattern , it will accurately reproduce the rest of it .
The project is mainly geared toward medicine , but they have also inserted the model into an artificial world in order to study its capacity for learning .
Depending on your choice of religion , the structure is the result of divine intervention or millions of years of chance and evolution .
When building AI systems , the problem has always been to find the appropriate structure or features .
The dualism of Descartes has been thoroughly debunked , and I 'm sure you are aware that evolution is not a religion .
The mind does " somehow " emerge from the brain 's deterministic processing of a continuous avalanche of unstructured data .
Looking for the structure of mind is like looking for the structure of fog from within the fog bank .
This is why it 's called the hard problem of consciousness ; the mistake most people make is thinking that we need to solve that problem before we can create an artificial mind .
After all , the pyramids were built with levers long before the Greeks came along and explained why a lever " somehow " increases the power of the person using it .
The real question is whether we will recognise an artificial mind if one emerges from an artificial brain .
It 's unlikely that such a mind would pass the Turing test , but we already have lots of examples of minds in our mammalian cousins that are also unable to pass the Turing test .</tokentext>
<sentencetext>"You're advocating the "emergent intelligence" model of AI, where intelligence "somehow" is created by the confluence of lots of data...[snip]...In practice the degrees of freedom which unstructured data provides far exceed the capability of current (and likely future) computers.
"
 
You sure about that?
[bluebrain.epfl.ch]. They have already created a molecular level model of the mammalian neocortex and the expected date for completion of a full model of the mammalian brain is solely dependent on the amount of money thrown at it.
The model neocortex can already faithfully recreate patterns seen in fMRI scans.
If given the first part of a pattern, it will accurately reproduce the rest of it.
The project is mainly geared toward medicine, but they have also inserted the model into an artificial world in order to study its capacity for learning.
Depending on your choice of religion, the structure is the result of divine intervention or millions of years of chance and evolution.
When building AI systems, the problem has always been to find the appropriate structure or features.
The dualism of Descartes has been thoroughly debunked, and I'm sure you are aware that evolution is not a religion.
The mind does "somehow" emerge from the brain's deterministic processing of a continuous avalanche of unstructured data.
Looking for the structure of mind is like looking for the structure of fog from within the fog bank.
This is why it's called the hard problem of consciousness; the mistake most people make is thinking that we need to solve that problem before we can create an artificial mind.
After all, the pyramids were built with levers long before the Greeks came along and explained why a lever "somehow" increases the power of the person using it.
The real question is whether we will recognise an artificial mind if one emerges from an artificial brain.
It's unlikely that such a mind would pass the Turing test, but we already have lots of examples of minds in our mammalian cousins that are also unable to pass the Turing test.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792510</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30794800</id>
	<title>Re:Non english text</title>
	<author>billius</author>
	<datestamp>1263648540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>From what I've heard, <a href="http://en.wikipedia.org/wiki/Language\_identification" title="wikipedia.org">language identification</a> [wikipedia.org] is a fairly well-understood problem in computational linguistics.  The language a given text is written in can generally be identified statistically using an n-gram method (often a <a href="http://en.wikipedia.org/wiki/Trigram" title="wikipedia.org">trigram</a> [wikipedia.org]).  Like the Wikipedia article states, there are problems given the fact that a lot of stuff on the web can have several languages on one page, but at least the bot should be able to fairly easily figure out if a page is written only in English.  There are even <a href="http://whatlanguageisthis.com/" title="whatlanguageisthis.com">javascript language identifiers</a> [whatlanguageisthis.com], so I think figuring out what language something is written in is the least of their worries.</htmltext>
<tokenext>From what I 've heard , language identification [ wikipedia.org ] is a fairly well-understood problem in computational linguistics .
The language a given text is written in can generally be identified statistically using an n-gram method ( often a trigram [ wikipedia.org ] ) .
Like the Wikipedia article states , there are problems given the fact that a lot of stuff on the web can have several languages on one page , but at least the bot should be able to fairly easily figure out if a page is written only in English .
There are even javascript language identifiers [ whatlanguageisthis.com ] , so I think figuring out what language something is written in is the least of their worries .</tokentext>
<sentencetext>From what I've heard, language identification [wikipedia.org] is a fairly well-understood problem in computational linguistics.
The language a given text is written in can generally be identified statistically using an n-gram method (often a trigram [wikipedia.org]).
Like the Wikipedia article states, there are problems given the fact that a lot of stuff on the web can have several languages on one page, but at least the bot should be able to fairly easily figure out if a page is written only in English.
There are even javascript language identifiers [whatlanguageisthis.com], so I think figuring out what language something is written in is the least of their worries.</sentencetext>
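The n-gram approach this comment describes can be sketched in a few lines. This is a hand-rolled toy, not how any production identifier works; in particular, the single-sentence "profiles" below stand in for what would really be large per-language training corpora.

```python
from collections import Counter

# Toy character-trigram language identifier. Real systems train profiles
# on large corpora; the single-sentence profiles used below are only a demo.

def trigrams(text):
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def similarity(a, b):
    # overlap score: shared trigram mass between two profiles
    return sum(min(a[t], b[t]) for t in a)

def identify(text, profiles):
    """profiles: dict mapping language name to a trigram Counter."""
    query = trigrams(text)
    scores = {lang: similarity(query, prof) for lang, prof in profiles.items()}
    return max(scores, key=scores.get)
```

Even with tiny profiles, function words and their surrounding characters ("the", "und") dominate the trigram counts, which is why the method works well on short monolingual pages and degrades on mixed-language ones.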
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792404</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792368</id>
	<title>Finally, people are getting AI right.</title>
	<author>Umuri</author>
	<datestamp>1263673560000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext><p>I've always been amazed that until recently, most work on AI has been focused on a preconstructed system that fits data into pathways while having some variation in thought abilities to let it expand its model slightly.<br>They'd write the rules for the system and try to include most of the work on it, and then see how well it does, with limited learning capabilities and still based on the original model.</p><p>I'm glad a lot of research is finally gearing more towards the path of having a small initial program, then feeding it data and letting it grow into its own intelligence.<br>If you give it the ability to learn, then it'll learn the rest itself, rather than giving it functions that let it pretend to learn while fitting into a model.</p><p>And I know there has been research into this in the past, but it didn't really take off till the last decade or so, and I'm glad it has.<br>True, or at least somewhat competent, AI, here we come.</p></htmltext>
<tokenext>I 've always been amazed that until recently , most work on AI has been focused as a preconstructed system that fits data into pathways while having some variation in thought abilities to let it expand it 's model slightly.They 'd write the rules for the system and try to include most of the work on it , and then let see how good it does , with limited learning capabilities and still based on the original model.I 'm glad a lot of research is finally gearing more towards the path of having a small initial program , then feeding it data and letting it grow into it 's own intelligence.If you give it the ability to learn , then it 'll learn itself the rest , rather than giving it functions that let it pretend to learn while fitting into a model.And i know there have been research into this in the past , but it did n't really take off till the last decade or so , and i 'm glad it has.True , or at least somewhat competent AI , here we come .</tokentext>
<sentencetext>I've always been amazed that until recently, most work on AI was focused on preconstructed systems that fit data into pathways, with some variation in thought abilities to let the system expand its model slightly.
They'd write the rules for the system, try to include most of the work in it, and then see how well it did, with limited learning capabilities and still based on the original model.
I'm glad a lot of research is finally gearing more towards the path of having a small initial program, then feeding it data and letting it grow into its own intelligence.
If you give it the ability to learn, it'll learn the rest itself, rather than having functions that let it pretend to learn while fitting into a model.
And I know there has been research into this in the past, but it didn't really take off till the last decade or so, and I'm glad it has.
True, or at least somewhat competent, AI, here we come.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793130</id>
	<title>Re:Non english text</title>
	<author>phantomfive</author>
	<datestamp>1263636600000</datestamp>
	<modclass>None</modclass>
	<modscore>2</modscore>
	<htmltext>(If you had read the article you would know) the machine is parsing English to create a database of relationships.  For example, if it sees the text, "there are many people, such as George Washington, Bill O'Reily, and Thomas Jefferson....." then it can infer that George Washington, Bill O'Reily, and Thomas Jefferson are all people.  Since a statement like this may be somewhat controversial, it uses Bayesian classification to establish a probability of the truth of the statement.<br> <br>
Thus if it stumbles across a non-English text, it will not be able to create any relationships.</htmltext>
<tokenext>( If you had read the article you would know ) the machine is parsing English to create a database of relationships .
For example , if it sees the text , " there are many people , such as George Washington , Bill O'Reily , and Thomas Jefferson..... " then it can infer that George Washington , Bill O'Reily , and Thomas Jefferson are all people .
Since a statement like this may be somewhat controversial , it uses bayesian classification to establish a probability of the truth of the statement .
Thus if it stumbles across a non-English text , it will not be able to create any relationships .</tokentext>
<sentencetext>(If you had read the article you would know) the machine is parsing English to create a database of relationships.
For example, if it sees the text, "there are many people, such as George Washington, Bill O'Reily, and Thomas Jefferson....." then it can infer that George Washington, Bill O'Reily, and Thomas Jefferson are all people.
Since a statement like this may be somewhat controversial, it uses Bayesian classification to establish a probability of the truth of the statement.
Thus if it stumbles across a non-English text, it will not be able to create any relationships.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792404</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796086</id>
	<title>AI - Ignorance and overblown expectations... again</title>
	<author>Yogs</author>
	<datestamp>1263664020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I will say, I'm disappointed by the comments I've seen here on slashdot.</p><p>The best comment came from an anonymous coward about the pining for an "emergent"-type system, the fact that we're not wired that way, and that while more power gives somewhat more in the way of degrees of freedom, it doesn't mean that everything can be analyzed together... you have to have some way of focusing (and a pretty darn good one, to prevent unimaginable problem blowup).</p><p>Bootstrapping works well when confined to a fixed arena with observable and unambiguous criteria for selecting behaviors or incorporating a piece of knowledge, and observable and unambiguous criteria for judging the success thereof.  That is to say, a tight focus and goal-directed behavior.  Without these and a tight feedback loop, the resulting system tends to disappoint.</p><p>Having as your scope reading the web to gain an understanding of the world is, um... just a bit outside that template for success.  While the big talk may be a prerequisite for grant interest, I doubt I have nearly as many illusions as the average slashdot reader.  I hope their work goes well, and I hope some of their techniques for extracting information from the web prove useful.  That said, it looked like their initial target was classification only.  Not trivial, but a very small part of the puzzle of intelligence to say the least, especially when you consider that the classifications this thing will suck in will reflect mostly the sort of classifications that we don't take for granted.</p><p>And here I'll start reflecting my bias.  
I am a former #$HumanCyclist (I did an internship about 10 years ago), and even though I am in some ways disappointed, I do think that the fact that they're actually building something (and along the way have been solving problems with it), and have been for a lot of years, means that there's a lot to learn from them.</p><p>Among the things the Cyc project has shown is exactly how important these sorts of unstated classifications turn out to be in the problem of doing even the most mundane things right.  But there's no point dwelling on that, because even assuming you have some impossibly large, beautiful graph reflecting a really solid and well-thought-out classification of everything, from every angle (hahaha), you're nowhere.</p><p>Facts are fuel... the engine is the rules.  Reading those from free text is a very, very dicey proposition, both because the parsing is infinitely harder, and because, much more so than facts, they're largely unstated and, in terms of our own learning, inferred from examples.  You can set up probability matrices or the like, but only if you know what you're evaluating for (how would you program "curiosity"?).  Even if you do get those matrices, reasoning with them directly is pretty much impracticable, so you have to make some arbitrary decisions about when you're confident enough to say you "know" something.  This is just really, really hard knowledge to get in any automated fashion.</p><p>Finally, for both facts and rules, the consequences of incorporating a poorly considered one can be quite dire, and there's no practicable way (as the amount of knowledge grows) to know whether it's consistent with what is considered true to that point.</p><p>Getting even more slippery, there is no one context or frame to consider everything in.  This goes equally well for facts and rules.  You could try to split hairs and say that given enough antecedents, your facts and rules are solid.  
However, as any kind of remotely practical matter, you need a way of accumulating and organizing these antecedents, and that's true both from a technical (engine execution) and a practical (reasoning and learning ease) perspective.</p><p>Oh, and as a minor matter, languages are difficult enough along the syntactic dimension, and then there's the semantics (in order to understand a statement, you have to understand the ones prior, the context or framing that may have switched, the built-up assumptions that maybe can be discarded, maybe not, etc.).</p></htmltext>
<tokenext>I will say , I 'm disappointed by the comments I 've seen here on slashdot.Best comment came from an anonymous coward about the pining for an " emergent " type system , the fact that we 're not wired that way , and that while more power gives some more in the way of degrees of freedom , it does n't mean that everything can be analyzed together... you have to have some way of focusing ( and a pretty darn good one to prevent unimaginable problem blowup ) .Bootstrapping works well when confined to a fixed arena with observable and unambiguous criteria for selection of behaviors or incorporating a piece of knowledge and observable and unambiguous criteria for judging the success thereof .
That is to say , a tight focus and goal directed behavior .
Without these and a tight feedback loop , the resulting system tends to disappoint.Having as your scope , reading the web to gain an understanding of the world is um... just a bit outside that template for success .
While the big talk may be a pre-requisite for grant interest , I doubt have nearly as many illusions as the average slashdot reader .
I hope their work goes well , and I hope some of their techniques for extracting information from the web prove useful .
That said , it looked like their initial target was classification only .
Not trivial , but a very small part of the puzzle of intelligence to say the least , especially when you consider the fact that the classifications this thing will suck in will reflect mostly the sort of classifications that we do n't take for granted.And here I 'll start reflecting my bias .
I am a former # $ HumanCyclist ( I did an internship about 10 years ago ) , because even though I am in some ways disappointed , I do think that the fact that they 're actually building something ( and along the way have been solving problems with it ) and have been for a lot of years means that there 's a lot to learn from them.Among the things the Cyc project has shown , is exactly how important these sorts of unstated classifications turn out to be in the problem of doing even the most mundane things right .
But there 's no point dwelling on that , because even assuming you have some impossibly large beautiful graph reflecting a really solid and well thought out classification of everything , from every angle ( hahaha ) , you 're nowhere.Facts are fuel... the engine is the rules .
Reading those from free text is a very , very dicey proposition , both because the parsing is infinitely harder , and because much more so than facts , they 're largely unstated and in terms of our own learning , inferred from examples .
You can set up probability matrixes or the like , but only if you know what you 're evaluating for ( how would you program " curiosity " ? ) .
Even if you do get those matrices , reasoning with them directly is pretty much impracticable , so you have to have to make some arbitrary decisions about when you 're confident enough to say you " know " something .
This is just really , really hard knowledge to get in any automated fashion.Finally , for both facts and rules , the consequences of incorporating a poorly considered one can be quite dire , and there 's no practiceable way ( as the amount of knowledge grows ) to know whether it 's consistent with what is considered true to that point.Getting even more slippery , there is no one context or frame to consider everything in .
This goes equally well for facts and rules .
You could try and split hairs and say that given enough antecedents , your facts and rules are solid .
However , as any kind of remotely practical matter , you need a way of accumulating and organizing these antecedents , and that 's true from both from an technical ( engine execution ) , and practical ( reasoning and learning ease ) perspective.Oh , and as a minor matter , languages are difficult enough from a syntactic dimension , and the symantics of it ( in order to understand a statement , you have to understand the ones prior , the context or framing that may have switched , the built up assumptions that maybe can be discarded , maybe not , etc .</tokentext>
<sentencetext>I will say, I'm disappointed by the comments I've seen here on slashdot.
The best comment came from an anonymous coward about the pining for an "emergent"-type system, the fact that we're not wired that way, and that while more power gives somewhat more in the way of degrees of freedom, it doesn't mean that everything can be analyzed together... you have to have some way of focusing (and a pretty darn good one, to prevent unimaginable problem blowup).
Bootstrapping works well when confined to a fixed arena with observable and unambiguous criteria for selecting behaviors or incorporating a piece of knowledge, and observable and unambiguous criteria for judging the success thereof.
That is to say, a tight focus and goal-directed behavior.
Without these and a tight feedback loop, the resulting system tends to disappoint.
Having as your scope reading the web to gain an understanding of the world is, um... just a bit outside that template for success.
While the big talk may be a prerequisite for grant interest, I doubt I have nearly as many illusions as the average slashdot reader.
I hope their work goes well, and I hope some of their techniques for extracting information from the web prove useful.
That said, it looked like their initial target was classification only.
Not trivial, but a very small part of the puzzle of intelligence to say the least, especially when you consider that the classifications this thing will suck in will reflect mostly the sort of classifications that we don't take for granted.
And here I'll start reflecting my bias.
I am a former #$HumanCyclist (I did an internship about 10 years ago), and even though I am in some ways disappointed, I do think that the fact that they're actually building something (and along the way have been solving problems with it), and have been for a lot of years, means that there's a lot to learn from them.
Among the things the Cyc project has shown is exactly how important these sorts of unstated classifications turn out to be in the problem of doing even the most mundane things right.
But there's no point dwelling on that, because even assuming you have some impossibly large, beautiful graph reflecting a really solid and well-thought-out classification of everything, from every angle (hahaha), you're nowhere.
Facts are fuel... the engine is the rules.
Reading those from free text is a very, very dicey proposition, both because the parsing is infinitely harder, and because, much more so than facts, they're largely unstated and, in terms of our own learning, inferred from examples.
You can set up probability matrices or the like, but only if you know what you're evaluating for (how would you program "curiosity"?).
Even if you do get those matrices, reasoning with them directly is pretty much impracticable, so you have to make some arbitrary decisions about when you're confident enough to say you "know" something.
This is just really, really hard knowledge to get in any automated fashion.
Finally, for both facts and rules, the consequences of incorporating a poorly considered one can be quite dire, and there's no practicable way (as the amount of knowledge grows) to know whether it's consistent with what is considered true to that point.
Getting even more slippery, there is no one context or frame to consider everything in.
This goes equally well for facts and rules.
You could try to split hairs and say that given enough antecedents, your facts and rules are solid.
However, as any kind of remotely practical matter, you need a way of accumulating and organizing these antecedents, and that's true both from a technical (engine execution) and a practical (reasoning and learning ease) perspective.
Oh, and as a minor matter, languages are difficult enough along the syntactic dimension, and then there's the semantics (in order to understand a statement, you have to understand the ones prior, the context or framing that may have switched, the built-up assumptions that maybe can be discarded, maybe not, etc.).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792462</id>
	<title>Re:Uh oh...</title>
	<author>Anonymous</author>
	<datestamp>1263674160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>4chan.  [shudder]</p></htmltext>
<tokenext>4chan .
[ shudder ]</tokentext>
<sentencetext>4chan.
[shudder]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792326</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792344</id>
	<title>First words learned</title>
	<author>Anonymous</author>
	<datestamp>1263673380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"Frosty Pist"  , if it reads slash dot</p></htmltext>
<tokenext>" Frosty Pist " , if it reads slash dot</tokentext>
<sentencetext>"Frosty Pist"  , if it reads slash dot</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792402</id>
	<title>do...</title>
	<author>Anonymous</author>
	<datestamp>1263673740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Does this mean somebody forgot to put a "break" in the loop?</p></htmltext>
<tokenext>Does this mean somebody forgot to put a " break " in the loop ?</tokentext>
<sentencetext>Does this mean somebody forgot to put a "break" in the loop?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792846</id>
	<title>42?</title>
	<author>JWSmythe</author>
	<datestamp>1263634080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>How come every time I ask Nell what the answer is to life, all it responds with is "42"?  When I ask what 42 means, it tells me that I'll need a bigger computer.</p></htmltext>
<tokenext>    How come every time I ask Nell what the answer is to life , all it responds with is " 42 " .
When I ask what 42 means , it tells me that I 'll need a bigger computer .</tokentext>
<sentencetext>How come every time I ask Nell what the answer is to life, all it responds with is "42"?
When I ask what 42 means, it tells me that I'll need a bigger computer.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30797342</id>
	<title>Local optima</title>
	<author>xcut</author>
	<datestamp>1263731100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>And how will they determine if this gets stuck in some local optimum for certain concepts, and thus stops learning anything relevant at all about any one given concept or topic?

The report is low on details and high on hype. There are no current algorithms that don't require heavy parameter tuning and constant monitoring to get right. Switching one on for a few years and hoping does not strike me as an exciting story.</htmltext>
<tokenext>And how will they determine if this gets stuck in some local optimum for certain concepts , and thus stops to learn anything relevant at all about any one given concept or topic ?
The report is low on details and high on hype .
There are no current algorithms that do n't require heavy parameter tuning and constant monitoring to get right .
Switching one on for a few years and hoping does not strike me as an exciting story .</tokentext>
<sentencetext>And how will they determine if this gets stuck in some local optimum for certain concepts, and thus stops learning anything relevant at all about any one given concept or topic?
The report is low on details and high on hype.
There are no current algorithms that don't require heavy parameter tuning and constant monitoring to get right.
Switching one on for a few years and hoping does not strike me as an exciting story.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30861606</id>
	<title>The Infancy of P.1</title>
	<author>jtgd</author>
	<datestamp>1264186620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>...just add virus to make him mobile.</htmltext>
<tokenext>...just add virus to make him mobile .</tokentext>
<sentencetext>...just add virus to make him mobile.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792580</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792588</id>
	<title>already been done</title>
	<author>phantomfive</author>
	<datestamp>1263675300000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p><div class="quote"><p> There is simply no existing database to tell computers that "cups" are kinds of "dishware" and that "calculators" are types of "electronics."  NELL could create a massive database like this, which would be extremely valuable to other AI researchers.</p></div><p>This is what they are trying to do, based on information they glean from the internet.  It's already been done, <a href="http://en.wikipedia.org/wiki/Cyc" title="wikipedia.org">with Cyc</a> [wikipedia.org].  The major difference seems to be that Cyc was built by hand, and cost a lot more.  It will be interesting to see if this experiment results in a higher or lower quality database.<br> <br>
Also, I question their assertion that it would be extremely valuable to other AI researchers.  Cyc has been around for a while now, and nothing really exciting has come of it.  I'm not sure why this would be any different.</p>
	</htmltext>
<tokenext>There is simply no existing database to tell computers that " cups " are kinds of " dishware " and that " calculators " are types of " electronics .
" NELL could create a massive database like this , which would be extremely valuable to other AI researchers.This is what they are trying to do , based on information they glean from the internet .
It 's already been done , with Cyc [ wikipedia.org ] .
The major difference seems to be that Cyc was built by hand , and cost a lot more .
It will be interesting to see if this experiment results in a higher or lower quality database .
Also , I question their assertion that it would be extremely valuable to other AI researchers .
Cyc has been around for a while now , and nothing really exciting has come of it .
I 'm not sure why this would be any different .</tokentext>
<sentencetext>There is simply no existing database to tell computers that "cups" are kinds of "dishware" and that "calculators" are types of "electronics."
NELL could create a massive database like this, which would be extremely valuable to other AI researchers.
This is what they are trying to do, based on information they glean from the internet.
It's already been done, with Cyc [wikipedia.org].
The major difference seems to be that Cyc was built by hand, and cost a lot more.
It will be interesting to see if this experiment results in a higher or lower quality database.
Also, I question their assertion that it would be extremely valuable to other AI researchers.
Cyc has been around for a while now, and nothing really exciting has come of it.
I'm not sure why this would be any different.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792540</id>
	<title>Pruning</title>
	<author>NonSequor</author>
	<datestamp>1263674760000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>In general I find that the quality of a data set tends to be determined by the number (and quality) of man-hours that go into maintaining it. Every database accumulates spurious entries, and if they aren't removed the data loses its integrity.</p><p>I'm very skeptical of the idea that this thing is going to keep taking input forever and accumulate a usable data set unless an army of student labor is press-ganged to prune it.</p></htmltext>
<tokenext>In general I find that the quality of a data set tends to be determined by the number ( and quality ) of man hours that go into maintaining it .
Every database accumulates spurious entries and if they are n't removed the data loses it 's integrity.I 'm very skeptical of the idea that this thing is going to keep taking input forever and accumulate a usable data set unless an army of student labor is press-ganged to prune it .</tokentext>
<sentencetext>In general I find that the quality of a data set tends to be determined by the number (and quality) of man-hours that go into maintaining it.
Every database accumulates spurious entries, and if they aren't removed the data loses its integrity.
I'm very skeptical of the idea that this thing is going to keep taking input forever and accumulate a usable data set unless an army of student labor is press-ganged to prune it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792648</id>
	<title>The web may have been a poor choice</title>
	<author>Anonymous</author>
	<datestamp>1263632580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>So far most of the words it's learned are related to various sex acts.</p></htmltext>
<tokenext>So far most of the words it 's learned are related to various sex acts .</tokentext>
<sentencetext>So far most of the words it's learned are related to various sex acts.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793096</id>
	<title>Wikipedia</title>
	<author>the person standing</author>
	<datestamp>1263636300000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext>Let it read wikipedia - don't let it get poisoned by twitter etc!</htmltext>
<tokenext>Let it read wikipedia - not get it poisoned by twitter etc !</tokentext>
<sentencetext>Let it read wikipedia - don't let it get poisoned by twitter etc!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796710</id>
	<title>It's not that the program couldn't stop running; t</title>
	<author>LS</author>
	<datestamp>1263719700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>.... program that never dies. It runs continuously.....  It's not that the program couldn't stop running; the idea is that there's no fixed end-point</p></div><p>Wow, I didn't even think that was physically possible!  Maybe google should borrow this tech for their web crawlers.  Must be a pain to restart them every day...</p>
	</htmltext>
<tokenext>.... program that never dies .
It runs continuously ..... It 's not that the program could n't stop running ; the idea is that there 's no fixed end-pointWow I did n't even think that was physically possible !
Maybe google should borrow this tech for their web crawlers .
Must be a pain to restart them every day.. .</tokentext>
<sentencetext> .... program that never dies.
It runs continuously.....  It's not that the program couldn't stop running; the idea is that there's no fixed end-point.
Wow, I didn't even think that was physically possible!
Maybe google should borrow this tech for their web crawlers.
Must be a pain to restart them every day...
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793086</id>
	<title>Re:Finally, people are getting AI right.</title>
	<author>phantomfive</author>
	<datestamp>1263636240000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext>AI history has gone back and forth between pre-constructed systems and models that expand. One of the earliest successful AI experiments was a checkers program that taught itself to play by playing against itself, and quickly got very strong. <br> <br>
Building a giant database of knowledge hasn't been possible for very long, because computers didn't have very much memory.  When system capabilities first reached the capacity to do so, it had to be constructed by hand because there was no online repository of information to extract data from: the internet just wasn't very big.  That particular project was known as Cyc, and it cost a lot of money.<br> <br>
Since that time, the internet has grown and there are massive amounts of information available.  It will be interesting to see the resultant quality of this database, to see if the information on the internet is good enough to make it usable.</htmltext>
<tokenext>AI history has gone back and forth between pre-constructed systems and models that expand .
One of the earliest successful AI experiments was a checkers program that taught itself to play by playing against itself , and quickly got very strong .
Building a giant database of knowledge has n't been possible for very long , because computers did n't have very much memory .
When system capabilities first reached the capacity to do so , it had to be constructed from hand because there was no online repository of information to extract data from : the internet just was n't very big .
That particular project was known as Cyc , and it cost a lot of money .
Since that time , the internet has grown and there are massive amounts of information available .
It will be interesting to see the resultant quality of this database , to see if the information on the internet is good enough to make it usable .</tokentext>
<sentencetext>AI history has gone back and forth between pre-constructed systems and models that expand.
One of the earliest successful AI experiments was a checkers program that taught itself to play by playing against itself, and quickly got very strong.
Building a giant database of knowledge hasn't been possible for very long, because computers didn't have very much memory.
When system capabilities first reached the capacity to do so, it had to be constructed by hand because there was no online repository of information to extract data from: the internet just wasn't very big.
That particular project was known as Cyc, and it cost a lot of money.
Since that time, the internet has grown and there are massive amounts of information available.
It will be interesting to see the resultant quality of this database, to see if the information on the internet is good enough to make it usable.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792368</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30801480</id>
	<title>Ghost?</title>
	<author>Anonymous</author>
	<datestamp>1263724860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>When will this thing build a ghost?</p></htmltext>
<tokenext>When will this thing build a ghost ?</tokentext>
<sentencetext>When will this thing build a ghost?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793578</id>
	<title>Re:Finally, people are getting AI right.</title>
	<author>DMUTPeregrine</author>
	<datestamp>1263640140000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>The obligatory classic AI Koan:<blockquote><div><p>In the days when Sussman was a novice Minsky once came to him as he sat hacking at the PDP-6. "What are you doing?", asked Minsky. "I am training a randomly wired neural net to play Tic-Tac-Toe." "Why is the net wired randomly?", asked Minsky. "I do not want it to have any preconceptions of how to play." Minsky shut his eyes. "Why do you close your eyes?", Sussman asked his teacher. "So the room will be empty." At that moment, Sussman was enlightened.</p></div> </blockquote>
	</htmltext>
<tokentext>The obligatory classic AI Koan : In the days when Sussman was a novice Minsky once came to him as he sat hacking at the PDP-6 .
" What are you doing ?
" , asked Minsky .
" I am training a randomly wired neural net to play Tic-Tac-Toe .
" " Why is the net wired randomly ?
" , asked Minsky .
" I do not want it to have any preconceptions of how to play .
" Minsky shut his eyes .
" Why do you close your eyes ?
" , Sussman asked his teacher .
" So the room will be empty .
" At that moment , Sussman was enlightened .</tokentext>
<sentencetext>The obligatory classic AI Koan: In the days when Sussman was a novice Minsky once came to him as he sat hacking at the PDP-6.
"What are you doing?
", asked Minsky.
"I am training a randomly wired neural net to play Tic-Tac-Toe.
" "Why is the net wired randomly?
", asked Minsky.
"I do not want it to have any preconceptions of how to play.
" Minsky shut his eyes.
"Why do you close your eyes?
", Sussman asked his teacher.
"So the room will be empty.
" At that moment, Sussman was enlightened. 
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792510</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792938</id>
	<title>It all boils down to three words.</title>
	<author>Anonymous</author>
	<datestamp>1263634980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>KILL. ALL. HUMANS.</p></htmltext>
<tokentext>KILL .
ALL .
HUMANS .</tokentext>
<sentencetext>KILL.
ALL.
HUMANS.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792374</id>
	<title>Machine learning algorithms</title>
	<author>sakdoctor</author>
	<datestamp>1263673560000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>Only as good as current machine learning algorithms.<br>So not very.</p></htmltext>
<tokentext>Only as good as current machine learning algorithms . So not very .</tokentext>
<sentencetext>Only as good as current machine learning algorithms.
So not very.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30795080</id>
	<title>Re:Finally, people are getting AI right.</title>
	<author>umghhh</author>
	<datestamp>1263651720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>What is the point of having an intelligent interlocutor - I mean the answer is known (42) and the rest is just plain old blathering about things - something I can do with my wife (if we were still talking with each other that is) so in fact this is just an exercise in futility. But of course there is money to be made there I guess - all those call center folk can then be optimized out of existence (sold into slavery to Zamunda, kidneys sold to some rich oil country etc) so maybe it makes sense after all?</htmltext>
<tokentext>What is the point of having an intelligent interlocutor - I mean the answer is known ( 42 ) and the rest is just plain old blathering about things - something I can do with my wife ( if we were still talking with each other that is ) so in fact this is just an exercise in futility .
But of course there is money to be made there I guess - all those call center folk can then be optimized out of existence ( sold into slavery to Zamunda , kidneys sold to some rich oil country etc ) so maybe it makes sense after all ?</tokentext>
<sentencetext>What is the point of having an intelligent interlocutor - I mean the answer is known (42) and the rest is just plain old blathering about things - something I can do with my wife (if we were still talking with each other that is) so in fact this is just an exercise in futility.
But of course there is money to be made there I guess - all those call center folk can then be optimized out of existence (sold into slavery to Zamunda, kidneys sold to some rich oil country etc) so maybe it makes sense after all?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792368</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30794358</id>
	<title>Wait a minute</title>
	<author>marqs</author>
	<datestamp>1263645120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>This is what I think happened.
<br>
<br>
Developers: We have a problem with the application.  There seems to be an infinite loop that prevents it from finishing.<br>
Marketing: So, that's the program's main feature, is it not?</htmltext>
<tokentext>This is what I think happened .
Developers : We have a problem with the application .
There seems to be an infinite loop that prevents it from finishing .
Marketing : So , that 's the program 's main feature , is it not ?</tokentext>
<sentencetext>This is what I think happened.
Developers: We have a problem with the application.
There seems to be an infinite loop that prevents it from finishing.
Marketing: So, that's the program's main feature, is it not?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792446</id>
	<title>Once this thing hits Encyclopedia Dramatica...</title>
	<author>xenophrak</author>
	<datestamp>1263674040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>...it will forever be stuck at the level of a retarded 8 year old. Or the level of a normal 4-chan user.</p></htmltext>
<tokentext>...it will forever be stuck at the level of a retarded 8 year old .
Or the level of a normal 4-chan user .</tokentext>
<sentencetext>...it will forever be stuck at the level of a retarded 8 year old.
Or the level of a normal 4-chan user.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793126</id>
	<title>Re:Once this thing hits Encyclopedia Dramatica...</title>
	<author>MooUK</author>
	<datestamp>1263636480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Same thing.</p></htmltext>
<tokentext>Same thing .</tokentext>
<sentencetext>Same thing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792446</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30795010</id>
	<title>Re:already been done</title>
	<author>Anonymous</author>
	<datestamp>1263650940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>As for the databases, you may want to check out dbpedia.org. An interesting well-funded project about using and combining such data would be found on larkc.eu.</p></htmltext>
<tokentext>As for the databases , you may want to check out dbpedia.org .
An interesting well-funded project about using and combining such data would be found on larkc.eu .</tokentext>
<sentencetext>As for the databases, you may want to check out dbpedia.org.
An interesting well-funded project about using and combining such data would be found on larkc.eu.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792588</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796892</id>
	<title>The Probable Outcome ...</title>
	<author>foobsr</author>
	<datestamp>1263723240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>... may be a site resembling <a href="http://www.20q.net/" title="20q.net">http://www.20q.net/</a> [20q.net] , which started as a never ending story (neural net) as well.
<br> <br>
<a href="http://en.wikipedia.org/wiki/20Q" title="wikipedia.org">Quote</a> [wikipedia.org]: "The 20Q was created in 1988 as an experiment in artificial intelligence (AI) The principle is that the player thinks of something and the 20Q artificial intelligence asks a series of questions before guessing what the player is thinking. This artificial intelligence learns on its own with the information relayed back to the players who interact with it, and is not programmed. The player can answer these questions with: Yes, No, Unknown, or Sometimes. The experiment is based on the classic word game of Twenty Questions, and on the computer game "Animals," popular in the early 1970s, which used a somewhat simpler method to guess an animal."
<br> <br>
CC.
	</htmltext>
<tokentext>... may be a site resembling http://www.20q.net/ [ 20q.net ] , which started as a never ending story ( neural net ) as well .
Quote [ wikipedia.org ] : " The 20Q was created in 1988 as an experiment in artificial intelligence ( AI ) The principle is that the player thinks of something and the 20Q artificial intelligence asks a series of questions before guessing what the player is thinking .
This artificial intelligence learns on its own with the information relayed back to the players who interact with it , and is not programmed .
The player can answer these questions with : Yes , No , Unknown , or Sometimes .
The experiment is based on the classic word game of Twenty Questions , and on the computer game " Animals , " popular in the early 1970s , which used a somewhat simpler method to guess an animal .
" CC .</tokentext>
<sentencetext>... may be a site resembling http://www.20q.net/ [20q.net] , which started as a never ending story (neural net) as well.
Quote [wikipedia.org]: "The 20Q was created in 1988 as an experiment in artificial intelligence (AI) The principle is that the player thinks of something and the 20Q artificial intelligence asks a series of questions before guessing what the player is thinking.
This artificial intelligence learns on its own with the information relayed back to the players who interact with it, and is not programmed.
The player can answer these questions with: Yes, No, Unknown, or Sometimes.
The experiment is based on the classic word game of Twenty Questions, and on the computer game "Animals," popular in the early 1970s, which used a somewhat simpler method to guess an animal.
"
 
CC.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30794810</id>
	<title>If only they could train it without the web</title>
	<author>ClosedSource</author>
	<datestamp>1263648660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Perhaps if there were a book in electronic form that had all English words in it perhaps with a definition of each word.</p></htmltext>
<tokentext>Perhaps if there were a book in electronic form that had all English words in it perhaps with a definition of each word .</tokentext>
<sentencetext>Perhaps if there were a book in electronic form that had all English words in it perhaps with a definition of each word.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793310</id>
	<title>ODG</title>
	<author>Anonymous</author>
	<datestamp>1263638340000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Oh dear god, this thing will be the ULTIMATE grammar Nazi!!!!</p></htmltext>
<tokentext>Oh dear god , this thing will be the ULTIMATE grammar Nazi ! ! ! !</tokentext>
<sentencetext>Oh dear god, this thing will be the ULTIMATE grammar Nazi!!!!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30795214</id>
	<title>Re:Once this thing hits Encyclopedia Dramatica...</title>
	<author>Anonymous</author>
	<datestamp>1263653160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>But you repeat yourself</p></htmltext>
<tokentext>But you repeat yourself</tokentext>
<sentencetext>But you repeat yourself</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792446</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30797384</id>
	<title>Re:The web: What a great source of information</title>
	<author>u38cg</author>
	<datestamp>1263731820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Children routinely learn perfect English with a complete generative grammar from corrupt sources.  Indeed, if you put children in an environment where nobody speaks a complete language, they will spontaneously evolve a grammatically complete language.  So it is possible (though I'm not saying it will be easy...)</htmltext>
<tokentext>Children routinely learn perfect English with a complete generative grammar from corrupt sources .
Indeed , if you put children in an environment where nobody speaks a complete language , they will spontaneously evolve a grammatically complete language .
So it is possible ( though I 'm not saying it will be easy... )</tokentext>
<sentencetext>Children routinely learn perfect English with a complete generative grammar from corrupt sources.
Indeed, if you put children in an environment where nobody speaks a complete language, they will spontaneously evolve a grammatically complete language.
So it is possible (though I'm not saying it will be easy...)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792560</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792524</id>
	<title>Test it</title>
	<author>Jorl17</author>
	<datestamp>1263674640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Show it only Porn-alike text. Let's see what it learns...</htmltext>
<tokentext>Show it only Porn-alike text .
Let 's see what it learns ...</tokentext>
<sentencetext>Show it only Porn-alike text.
Let's see what it learns...</sentencetext>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796434
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792326
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30794366
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792326
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796210
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793376
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792326
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792462
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792326
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30797384
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792560
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792964
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792588
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30861606
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792580
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793130
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792404
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30795010
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792588
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793136
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792446
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796788
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793632
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792540
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30795650
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792354
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30794500
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792394
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792700
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792368
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30795696
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792394
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30795214
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792446
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792778
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792510
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792368
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793086
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792368
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793126
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792446
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30795080
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792368
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796488
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792510
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792368
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30794800
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792404
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792940
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792560
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796902
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792404
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_16_194238_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793578
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792510
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792368
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_16_194238.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30797342
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_16_194238.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792332
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_16_194238.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792354
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30795650
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_16_194238.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792326
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30794366
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792462
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796434
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793376
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796210
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_16_194238.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792374
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793624
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796788
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_16_194238.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792560
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792940
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30797384
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_16_194238.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792402
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_16_194238.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796086
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_16_194238.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792394
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30795696
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30794500
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_16_194238.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30794810
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_16_194238.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792580
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30861606
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_16_194238.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792648
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_16_194238.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792540
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793632
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_16_194238.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792522
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_16_194238.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792404
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793130
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796902
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30794800
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_16_194238.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792446
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793136
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793126
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30795214
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_16_194238.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792588
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30795010
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792964
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_16_194238.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792368
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793086
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792510
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792778
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30796488
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30793578
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792700
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30795080
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_16_194238.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_16_194238.30792530
</commentlist>
</conversation>
