<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_11_26_0322227</id>
	<title>Tag Images With Your Mind</title>
	<author>samzenpus</author>
	<datestamp>1259243400000</datestamp>
	<htmltext>blee37 writes <i>"Researchers at Microsoft have invented a system for <a href="http://scitedaily.wordpress.com/2009/11/25/tag-images-with-your-mind/">tagging images by reading brain scans</a> from an electroencephalograph (EEG).  Tagging images is an important task because many images on the web are unlabeled and have no semantic information.  This new method allows an appropriate tag to be generated by an AI algorithm interpreting the EEG scan of a person's brain while they view an image.  The person need only view the image for as little as 500 ms.  Other current methods for generating tags include flat out paying people to do it manually, putting the task on Amazon Mechanical Turk, or using Google Image Labeler."</i></htmltext>
<tokentext>blee37 writes " Researchers at Microsoft have invented a system for tagging images by reading brain scans from an electroencephalograph ( EEG ) .
Tagging images is an important task because many images on the web are unlabeled and have no semantic information .
This new method allows an appropriate tag to be generated by an AI algorithm interpreting the EEG scan of a person 's brain while they view an image .
The person need only view the image for as little as 500 ms. Other current methods for generating tags include flat out paying people to do it manually , putting the task on Amazon Mechanical Turk , or using Google Image Labeler .
"</tokentext>
<sentencetext>blee37 writes "Researchers at Microsoft have invented a system for tagging images by reading brain scans from an electroencephalograph (EEG).
Tagging images is an important task because many images on the web are unlabeled and have no semantic information.
This new method allows an appropriate tag to be generated by an AI algorithm interpreting the EEG scan of a person's brain while they view an image.
The person need only view the image for as little as 500 ms.  Other current methods for generating tags include flat out paying people to do it manually, putting the task on Amazon Mechanical Turk, or using Google Image Labeler.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236630</id>
	<title>And the world gets creepier.</title>
	<author>billsayswow</author>
	<datestamp>1259248260000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Now you'll be able to see every bizarre thing that at least someone in the world finds attractive. "[Picture of the Grand Canyon] 17% of viewers found this image 'Sexy'"</htmltext>
<tokentext>Now you 'll be able to see every bizarre thing that at least someone in the world finds attractive .
" [ Picture of the Grand Canyon ] 17 % of viewers found this image 'Sexy ' "</tokentext>
<sentencetext>Now you'll be able to see every bizarre thing that at least someone in the world finds attractive.
"[Picture of the Grand Canyon] 17% of viewers found this image 'Sexy'"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236658</id>
	<title>Oh Microsoft...</title>
	<author>yttrstein</author>
	<datestamp>1259248500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>An EEG is not the same thing as a "brain scan".  An EEG is an analog point to point system which is very good at "reading" the parts of the generalized electrical field that reaches the scalp from the brain.  Using EEG output to control stuff is a fun sideline which is almost exactly as old as EEG technology itself.<br><br>It doesn't work very well, and it very probably never will.  The variance in electrical activity in the brain between two people receiving the same sensory input is, in an average way, too great to be useful.<br><br>Once someone comes up with a way to shrink an MRI machine to the size of a quarter that you just stick to your forehead and talks bluetooth to all your devices, then we'll be ok.</htmltext>
<tokentext>An EEG is not the same thing as a " brain scan " .
An EEG is an analog point to point system which is very good at " reading " the parts of the generalized electrical field that reaches the scalp from the brain .
Using EEG output to control stuff is a fun sideline which is almost exactly as old as EEG technology itself .
It does n't work very well , and it very probably never will .
The variance in electrical activity in the brain between two people receiving the same sensory input is , in an average way , too great to be useful .
Once someone comes up with a way to shrink an MRI machine to the size of a quarter that you just stick to your forehead and talks bluetooth to all your devices , then we 'll be ok .</tokentext>
<sentencetext>An EEG is not the same thing as a "brain scan".
An EEG is an analog point to point system which is very good at "reading" the parts of the generalized electrical field that reaches the scalp from the brain.
Using EEG output to control stuff is a fun sideline which is almost exactly as old as EEG technology itself.
It doesn't work very well, and it very probably never will.
The variance in electrical activity in the brain between two people receiving the same sensory input is, in an average way, too great to be useful.
Once someone comes up with a way to shrink an MRI machine to the size of a quarter that you just stick to your forehead and talks bluetooth to all your devices, then we'll be ok.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236536</id>
	<title>What can go wrong?</title>
	<author>Anonymous</author>
	<datestamp>1259247540000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>No really what can go wrong with using your unconscious animal nature to tag every photo with a (decent) girl in bikini as "To Do"</p></htmltext>
<tokentext>No really what can go wrong with using your unconscious animal nature to tag every photo with a ( decent ) girl in bikini as " To Do "</tokentext>
<sentencetext>No really what can go wrong with using your unconscious animal nature to tag every photo with a (decent) girl in bikini as "To Do"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236804</id>
	<title>First post!</title>
	<author>Hemi Rodner</author>
	<datestamp>1259249820000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>I rock baby</p></htmltext>
<tokentext>I rock baby</tokentext>
<sentencetext>I rock baby</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236798</id>
	<title>I was tagging that image, honest!</title>
	<author>noidentity</author>
	<datestamp>1259249760000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>Honey, I wasn't looking at her breasts; I was just tagging the image using Microsoft's new mind tagging, honest!</htmltext>
<tokentext>Honey , I was n't looking at her breasts ; I was just tagging the image using Microsoft 's new mind tagging , honest !</tokentext>
<sentencetext>Honey, I wasn't looking at her breasts; I was just tagging the image using Microsoft's new mind tagging, honest!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236592</id>
	<title>it works great!</title>
	<author>Anonymous</author>
	<datestamp>1259248020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>i just tagged this story with my mind<br>

double the killer delete select all</htmltext>
<tokentext>i just tagged this story with my mind double the killer delete select all</tokentext>
<sentencetext>i just tagged this story with my mind

double the killer delete select all</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236966</id>
	<title>Oh, that's what the internet needs more of!</title>
	<author>Interoperable</author>
	<datestamp>1259250900000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>Tags.</htmltext>
<tokentext>Tags .</tokentext>
<sentencetext>Tags.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30238508</id>
	<title>Re:Fun and Easy to Use</title>
	<author>DynaSoar</author>
	<datestamp>1259262960000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><div class="quote"><p>I used to work in an EEG lab, and I can tell you that those caps are pretty uncomfortable to wear.  After they put them on, you stick these little needles into the leads and squirt conductive goop on your scalp. It takes a few cycles to rinse that stuff out too.</p></div><p>Smitty, we've come a long way from those caps. There are now "caps" that are essentially nets of elastic cord with plastic cups containing pieces of sponge in them, the electrodes embedded in the sponge. Dip it in mild salt water for conduction, shake it out so there's no drips running together bridging the electrode sites, and pull it on. I could get good signal on 128 channels in less than 10 minutes from the time they walked in to data collection start.</p><p>There is also a European company selling a similar get up, but the preamps are built into the cups on the net, making impedance matching irrelevant and signal balancing automatic on the fly. These are so stable that they can be used ambulatory.</p><p>And nobody ever has to get goop or glue stuck on/into them any more.</p>
	</htmltext>
<tokentext>I used to work in an EEG lab , and I can tell you that those caps are pretty uncomfortable to wear .
After they put them on , you stick these little needles into the leads and squirt conductive goop on your scalp .
It takes a few cycles to rinse that stuff out too .
Smitty , we 've come a long way from those caps .
There are now " caps " that are essentially nets of elastic cord with plastic cups containing pieces of sponge in them , the electrodes embedded in the sponge .
Dip it in mild salt water for conduction , shake it out so there 's no drips running together bridging the electrode sites , and pull it on .
I could get good signal on 128 channels in less than 10 minutes from the time they walked in to data collection start .
There is also a European company selling a similar get up , but the preamps are built into the cups on the net , making impedance matching irrelevant and signal balancing automatic on the fly .
These are so stable that they can be used ambulatory .
And nobody ever has to get goop or glue stuck on/into them any more .</tokentext>
<sentencetext>I used to work in an EEG lab, and I can tell you that those caps are pretty uncomfortable to wear.
After they put them on, you stick these little needles into the leads and squirt conductive goop on your scalp.
It takes a few cycles to rinse that stuff out too.
Smitty, we've come a long way from those caps.
There are now "caps" that are essentially nets of elastic cord with plastic cups containing pieces of sponge in them, the electrodes embedded in the sponge.
Dip it in mild salt water for conduction, shake it out so there's no drips running together bridging the electrode sites, and pull it on.
I could get good signal on 128 channels in less than 10 minutes from the time they walked in to data collection start.
There is also a European company selling a similar get up, but the preamps are built into the cups on the net, making impedance matching irrelevant and signal balancing automatic on the fly.
These are so stable that they can be used ambulatory.
And nobody ever has to get goop or glue stuck on/into them any more.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236848</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30239074</id>
	<title>Cost Effective?</title>
	<author>Flwyd</author>
	<datestamp>1259267520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Are brain scans really so cheap that it's cheaper to set up an EEG than to pay someone in a third-world country to do it?</p></htmltext>
<tokentext>Are brain scans really so cheap that it 's cheaper to set up an EEG than to pay someone in a third-world country to do it ?</tokentext>
<sentencetext>Are brain scans really so cheap that it's cheaper to set up an EEG than to pay someone in a third-world country to do it?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236740</id>
	<title>Re:Bwahahahahaa</title>
	<author>smitty777</author>
	<datestamp>1259249280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>They're not gettin' their mind probes through my freakin tin foil barrier</p></htmltext>
<tokentext>They 're not gettin ' their mind probes through my freakin tin foil barrier</tokentext>
<sentencetext>They're not gettin' their mind probes through my freakin tin foil barrier</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236570</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237222</id>
	<title>Re:What can go wrong?</title>
	<author>TropicalCoder</author>
	<datestamp>1259252820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Using an EEG scan of a person's brain while they view an image could yield very different results for an image of a naked woman depending on the viewer's sex or sexual persuasion. Also, for images of objects and images of people in general - each viewer would have a different set of associations for a given image. For example, imagine the EEG of a person with arachnophobia when presented with a picture of a spider, etc.</p></htmltext>
<tokentext>Using an EEG scan of a person 's brain while they view an image could yield very different results for an image of a naked woman depending on the viewer 's sex or sexual persuasion .
Also , for images of objects and images of people in general - each viewer would have a different set of associations for a given image .
For example , imagine the EEG of a person with arachnophobia when presented with a picture of a spider , etc .</tokentext>
<sentencetext>Using an EEG scan of a person's brain while they view an image could yield very different results for an image of a naked woman depending on the viewer's sex or sexual persuasion.
Also, for images of objects and images of people in general - each viewer would have a different set of associations for a given image.
For example, imagine the EEG of a person with arachnophobia when presented with a picture of a spider, etc.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236536</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30238182</id>
	<title>Re:Fun and Easy to Use</title>
	<author>jpmorgan</author>
	<datestamp>1259260260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>And as we all know, no technology that was slightly inconvenient in a lab has ever had any value or practical use.</p></htmltext>
<tokentext>And as we all know , no technology that was slightly inconvenient in a lab has ever had any value or practical use .</tokentext>
<sentencetext>And as we all know, no technology that was slightly inconvenient in a lab has ever had any value or practical use.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236848</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237114</id>
	<title>Re:Looks Good on Paper...</title>
	<author>ubrgeek</author>
	<datestamp>1259251980000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>0</modscore>
	<htmltext>Wonder if it might make <a href="http://www.fukung.net/" title="fukung.net" rel="nofollow">Fukung</a> [fukung.net] funny again.</htmltext>
<tokentext>Wonder if it might make Fukung [ fukung.net ] funny again .</tokentext>
<sentencetext>Wonder if it might make Fukung [fukung.net] funny again.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236518</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30241908</id>
	<title>So what?</title>
	<author>rantingkitten</author>
	<datestamp>1259249340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><i>Tagging images is an important task because many images on the web are unlabeled and have no semantic information. </i> <br>
<br>
And yet somehow we've managed to survive.  I've never really seen the point behind "tagging" much of anything.  In every implementation, it just amounts to a mostly random bunch of words that a mostly random person or group thought vaguely described the item at that time.  It's never been useful for finding more of the same because tags are so absurdly broad, and it's never been useful for narrowing down searches.  Most of the time they're not even useful for getting vague overviews of the item.<br>
<br>
Right here on slashdot, tags on the front page include "!change", "social", "donotwant", and "duh".  There will never be a point at which I am going to think "Gee, I'd sure like to read more stories about 'not-change'.  I'll just click this tag here..."  That doesn't even tell me what the story is about -- it only tells me that enough smartasses thought it was clever for some reason. <br>
<br>
The same pretty much applies to anything else that gets "tagged" online.  It's just noise.  Why does it matter?</htmltext>
<tokentext>Tagging images is an important task because many images on the web are unlabeled and have no semantic information .
And yet somehow we 've managed to survive .
I 've never really seen the point behind " tagging " much of anything .
In every implementation , it just amounts to a mostly random bunch of words that a mostly random person or group thought vaguely described the item at that time .
It 's never been useful for finding more of the same because tags are so absurdly broad , and it 's never been useful for narrowing down searches .
Most of the time they 're not even useful for getting vague overviews of the item .
Right here on slashdot , tags on the front page include " ! change " , " social " , " donotwant " , and " duh " .
There will never be a point at which I am going to think " Gee , I 'd sure like to read more stories about 'not-change' .
I 'll just click this tag here... " That does n't even tell me what the story is about -- it only tells me that enough smartasses thought it was clever for some reason .
The same pretty much applies to anything else that gets " tagged " online .
It 's just noise .
Why does it matter ?</tokentext>
<sentencetext>Tagging images is an important task because many images on the web are unlabeled and have no semantic information.
And yet somehow we've managed to survive.
I've never really seen the point behind "tagging" much of anything.
In every implementation, it just amounts to a mostly random bunch of words that a mostly random person or group thought vaguely described the item at that time.
It's never been useful for finding more of the same because tags are so absurdly broad, and it's never been useful for narrowing down searches.
Most of the time they're not even useful for getting vague overviews of the item.
Right here on slashdot, tags on the front page include "!change", "social", "donotwant", and "duh".
There will never be a point at which I am going to think "Gee, I'd sure like to read more stories about 'not-change'.
I'll just click this tag here..."  That doesn't even tell me what the story is about -- it only tells me that enough smartasses thought it was clever for some reason.
The same pretty much applies to anything else that gets "tagged" online.
It's just noise.
Why does it matter?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236628</id>
	<title>Boobs boobs boobs</title>
	<author>erroneus</author>
	<datestamp>1259248260000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>Microsoft, be warned.  Some people have a limited scope in terms of what they are thinking about at any given time.</p></htmltext>
<tokentext>Microsoft , be warned .
Some people have a limited scope in terms of what they are thinking about at any given time .</tokentext>
<sentencetext>Microsoft, be warned.
Some people have a limited scope in terms of what they are thinking about at any given time.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237880</id>
	<title>Re:Fun and Easy to Use</title>
	<author>Anonymous</author>
	<datestamp>1259257920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Not to mention novel stimuli.<br>http://scholar.google.com/scholar?q=eeg+novel+stimuli&amp;hl=en&amp;client=firefox-a&amp;rls=org.mozilla:en-US:official&amp;hs=H3h&amp;um=1&amp;ie=UTF-8&amp;oi=scholart</p></htmltext>
<tokentext>Not to mention novel stimuli .
http://scholar.google.com/scholar?q=eeg+novel+stimuli&amp;hl=en&amp;client=firefox-a&amp;rls=org.mozilla:en-US:official&amp;hs=H3h&amp;um=1&amp;ie=UTF-8&amp;oi=scholart</tokentext>
<sentencetext>Not to mention novel stimuli.
http://scholar.google.com/scholar?q=eeg+novel+stimuli&amp;hl=en&amp;client=firefox-a&amp;rls=org.mozilla:en-US:official&amp;hs=H3h&amp;um=1&amp;ie=UTF-8&amp;oi=scholart</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236848</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30239688</id>
	<title>Re:Looks Good on Paper...</title>
	<author>MrKaos</author>
	<datestamp>1259230260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>idhitit, awesome,ohgod, thehelliswrongwithme, whydidisavethisagain, ohthatswhy, shit, wallpaper, and photoshop</p></div></blockquote><p>
	I wonder what Steve Ballmer would think seeing a picture of him throwing a chair.</p>
	</htmltext>
<tokentext>idhitit , awesome , ohgod , thehelliswrongwithme , whydidisavethisagain , ohthatswhy , shit , wallpaper , and photoshop
I wonder what Steve Ballmer would think seeing a picture of him throwing a chair .</tokentext>
<sentencetext>idhitit, awesome,ohgod, thehelliswrongwithme, whydidisavethisagain, ohthatswhy, shit, wallpaper, and photoshop
I wonder what Steve Ballmer would think seeing a picture of him throwing a chair.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236518</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236848</id>
	<title>Fun and Easy to Use</title>
	<author>smitty777</author>
	<datestamp>1259250120000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>This is typical of MS - thinking that something like this would be easy for the average user.  FTA: <i>"However, the mind reading approach has the advantage that it does not require any work at all from the user."</i></p><p>So, in order to use this system, we should all strap on EEG caps while we're surfing the web.  Sounds real practical to me - I used to work in an EEG lab, and I can tell you that those caps are pretty uncomfortable to wear.  After they put them on, you stick these little needles into the leads and squirt conductive goop on your scalp. It takes a few cycles to rinse that stuff out too.</p><p>Way to go MS for making productivity so much easier.</p></htmltext>
<tokentext>This is typical of MS - thinking that something like this would be easy for the average user .
FTA : " However , the mind reading approach has the advantage that it does not require any work at all from the user .
" So , in order to use this system , we should all strap on EEG caps while we 're surfing the web .
Sounds real practical to me - I used to work in an EEG lab , and I can tell you that those caps are pretty uncomfortable to wear .
After they put them on , you stick these little needles into the leads and squirt conductive goop on your scalp .
It takes a few cycles to rinse that stuff out too .
Way to go MS for making productivity so much easier .</tokentext>
<sentencetext>This is typical of MS - thinking that something like this would be easy for the average user.
FTA: "However, the mind reading approach has the advantage that it does not require any work at all from the user.
"So, in order to use this system, we should all strap on EEG caps while we're surfing the web.
Sounds real practical to me - I used to work in an EEG lab, and I can tell you that those caps are pretty uncomfortable to wear.
After they put them on, you stick these little needles into the leads and squirt conductive goop on your scalp.
It takes a few cycles to rinse that stuff out too.
Way to go MS for making productivity so much easier.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237796</id>
	<title>Focus</title>
	<author>mumb0.jumb0</author>
	<datestamp>1259257380000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>What happens when I'm tagging a photo but listening to music at the same time?</p><p>Or I run the photo tagging software in a small window and watch a movie (or some porn) instead?</p><p>So they can create tags from brain waves, but there's no way to tell what a user is actually focussing on.</p></htmltext>
<tokentext>What happens when I 'm tagging a photo but listening to music at the same time ?
Or I run the photo tagging software in a small window and watch a movie ( or some porn ) instead ?
So they can create tags from brain waves , but there 's no way to tell what a user is actually focussing on .</tokentext>
<sentencetext>What happens when I'm tagging a photo but listening to music at the same time?
Or I run the photo tagging software in a small window and watch a movie (or some porn) instead?
So they can create tags from brain waves, but there's no way to tell what a user is actually focussing on.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237296</id>
	<title>Terminology Police</title>
	<author>smitty777</author>
	<datestamp>1259253300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Just one minor nitnoid: the title of this article should be "Tagging Images With Your <i> Brain</i>", not Mind.  Electrical impulses are used - using the word mind implies that some conscious effort is involved. This is strictly identifying patterns using machine algorithms independent of the user's thought process.</p></htmltext>
<tokentext>Just one minor nitnoid : the title of this article should be " Tagging Images With Your Brain " , not Mind .
Electrical impulses are used - using the word mind implies that some conscious effort is involved .
This is strictly identifying patterns using machine algorithms independent of the user 's thought process .</tokentext>
<sentencetext>Just one minor nitnoid: the title of this article should be "Tagging Images With Your  Brain", not Mind.
Electrical impulses are used - using the word mind implies that some conscious effort is involved.
This is strictly identifying patterns using machine algorithms independent of the user's thought process.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30240068</id>
	<title>Soon we can implement a lar!</title>
	<author>BoxedFlame</author>
	<datestamp>1259233800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><a href="http://cheeseburgerbrown.com/stories/Idiots_Mask/" title="cheeseburgerbrown.com">I for one welcome our mask overlords</a> [cheeseburgerbrown.com]</p></htmltext>
<tokentext>I for one welcome our mask overlords [ cheeseburgerbrown.com ]</tokentext>
<sentencetext>I for one welcome our mask overlords [cheeseburgerbrown.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237216</id>
	<title>Re:I don't see it as innovative</title>
	<author>bluesatin</author>
	<datestamp>1259252760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Then you could use "thought macros" to control wearable computers.</p></div><p>What, like <a href="http://www.ocztechnology.com/products/ocz_peripherals/nia-neural_impulse_actuator" title="ocztechnology.com">this</a> [ocztechnology.com] product?</p>
	</htmltext>
<tokentext>Then you could use " thought macros " to control wearable computers .
What , like this [ ocztechnology.com ] product ?</tokentext>
<sentencetext>Then you could use "thought macros" to control wearable computers.
What, like this [ocztechnology.com] product?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236820</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237850</id>
	<title>Re:Fun and Easy to Use</title>
	<author>Anonymous</author>
	<datestamp>1259257680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Um, yeah, because clearly they're planning to ship this with every copy of windows and include one of those EEG caps with it.</p><p>Oh wait, they're not as stupid as you think, so obviously they're going to go another way.</p></htmltext>
<tokentext>Um , yeah , because clearly they 're planning to ship this with every copy of windows and include one of those EEG caps with it .
Oh wait , they 're not as stupid as you think , so obviously they 're going to go another way .</tokentext>
<sentencetext>Um, yeah, because clearly they're planning to ship this with every copy of windows and include one of those EEG caps with it.
Oh wait, they're not as stupid as you think, so obviously they're going to go another way.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236848</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236984</id>
	<title>Re:Oh Microsoft...</title>
	<author>TheLink</author>
	<datestamp>1259251020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>&gt; The variance in electrical activity in the brain between two people receiving the same sensory input is, in an average way, too great to be useful.<br><br>You shouldn't directly use the "thought pattern" of a person to tag the data.<br><br>As you said, the thought patterns are likely to be different from person to person.<br><br>What you do though is, for each tagging participant, you get the person's thought patterns for a whole bunch of tags.<br><br>Then they can tag stuff really quickly just by looking at them. The advanced people might be able to manage multi-tags with a single thought pattern.<br><br>Lastly, I don't think you necessarily have to use a thought pattern that's related to the object.</htmltext>
<tokenext>&gt; The variance in electrical activity in the brain between two people receiving the same sensory input is , in an average way , too great to be useful .
You should n't directly use the " thought pattern " of a person to tag the data .
As you said , the thought patterns are likely to be different from person to person .
What you do though is , for each tagging participant , you get the person 's thought patterns for a whole bunch of tags .
Then they can tag stuff really quickly just by looking at them .
The advanced people might be able to manage multi-tags with a single thought pattern .
Lastly , I do n't think you necessarily have to use a thought pattern that 's related to the object .</tokentext>
<sentencetext>&gt; The variance in electrical activity in the brain between two people receiving the same sensory input is, in an average way, too great to be useful.
You shouldn't directly use the "thought pattern" of a person to tag the data.
As you said, the thought patterns are likely to be different from person to person.
What you do though is, for each tagging participant, you get the person's thought patterns for a whole bunch of tags.
Then they can tag stuff really quickly just by looking at them.
The advanced people might be able to manage multi-tags with a single thought pattern.
Lastly, I don't think you necessarily have to use a thought pattern that's related to the object.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236658</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236518</id>
	<title>Looks Good on Paper...</title>
	<author>Anonymous</author>
	<datestamp>1259247360000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><p>Honestly this is nice, but seriously if my mind was tagging my images there would be something like the following list of tags<br>idhitit, awesome,ohgod, thehelliswrongwithme, whydidisavethisagain, ohthatswhy, shit, wallpaper, and photoshop<br>and that's just keeping it within PG-13.</p></htmltext>
<tokenext>Honestly this is nice , but seriously if my mind was tagging my images there would be something like the following list of tags idhitit , awesome , ohgod , thehelliswrongwithme , whydidisavethisagain , ohthatswhy , shit , wallpaper , and photoshop and that 's just keeping it within PG-13 .</tokentext>
<sentencetext>Honestly this is nice, but seriously if my mind was tagging my images there would be something like the following list of tags idhitit, awesome, ohgod, thehelliswrongwithme, whydidisavethisagain, ohthatswhy, shit, wallpaper, and photoshop and that's just keeping it within PG-13.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236620</id>
	<title>Foo</title>
	<author>Anonymous</author>
	<datestamp>1259248200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Using an EEG? Amateurs. They should be making and using a direct neural interface :)</htmltext>
<tokenext>Using an EEG ?
Amateurs. They should be making and using a direct neural interface : )</tokentext>
<sentencetext>Using an EEG?
Amateurs. They should be making and using a direct neural interface :)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236782</id>
	<title>Re:I mentally tagged this as MenWhoStateAtGoats</title>
	<author>Anonymous</author>
	<datestamp>1259249640000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>More importantly, <i>what</i> are they stating at goats?</p></htmltext>
<tokenext>More importantly , what are they stating at goats ?</tokentext>
<sentencetext>More importantly, what are they stating at goats?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236528</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30283762</id>
	<title>recaptcha-like</title>
	<author>DriveDog</author>
	<datestamp>1259685420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>How about, with or without the brain sensors, a Recaptcha-like system where multiple people tag each photo and each person has to tag several photos, one already heavily processed and one as-yet-untagged? This might prove to be a lot more difficult for machines to do than text systems. On the other hand, if the determined spammers out there figure out how to get a machine to do it, then they've done our work and we now have a machine that can tag photos.</htmltext>
<tokenext>How about , with or without the brain sensors , a Recaptcha-like system where multiple people tag each photo and each person has to tag several photos , one already heavily processed and one as-yet-untagged ?
This might prove to be a lot more difficult for machines to do than text systems .
On the other hand , if the determined spammers out there figure out how to get a machine to do it , then they 've done our work and we now have a machine that can tag photos .</tokentext>
<sentencetext>How about, with or without the brain sensors, a Recaptcha-like system where multiple people tag each photo and each person has to tag several photos, one already heavily processed and one as-yet-untagged?
This might prove to be a lot more difficult for machines to do than text systems.
On the other hand, if the determined spammers out there figure out how to get a machine to do it, then they've done our work and we now have a machine that can tag photos.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236820</id>
	<title>I don't see it as innovative</title>
	<author>TheLink</author>
	<datestamp>1259249880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It's not going to replace keyboard tagging now.<br><br>But in the future more advanced versions might.<br><br>Then you could use "thought macros" to control wearable computers.<br><br>The measurements of thought patterns are likely to be specific to each person. So devices that use thought input would have to be trained.<br><br>But after that, you could be thinking of stuff like "purple green striped elephant" as the escape sequence to tell the computer to start listening in and doing stuff based on the thought patterns it recognizes (which could include: take a picture of this, and associate it with this current thought pattern). You then use a different unique thought pattern to get it to stop (or press a manual button if stuff screws up :) ).<br><br>Then whenever you think of a thought pattern the picture or other computer memorized object (audio, url, file) will be recalled.<br><br>I have long considered this as the next step in augmenting humans.<br><br>Once you have that, people will have perfect photographic, "videographic" and "audiographic" memory, and be capable of "virtual telepathy" e.g. communicate with others just by thinking (which is trivial obvious step once you combine the thought macro stuff with wireless comms).<br><br>The problem people have to realize is the MPAA, RIAA, DRM and restrictive copyright laws will get in the way of such augmentation. It will either be crippled, prohibited or taxed severely.<br><br>They may not be happy with just a "penny for your thoughts", especially if they have got the law to consider it as "their thoughts" and their property, and not yours.<br><br>p.s. I don't see this development as "innovative", since I consider it a rather obvious step (along with the other rather obvious steps I've mentioned in this post), but I'm sure it's probably patented etc ;).</htmltext>
<tokenext>It 's not going to replace keyboard tagging now .
But in the future more advanced versions might .
Then you could use " thought macros " to control wearable computers .
The measurements of thought patterns are likely to be specific to each person .
So devices that use thought input would have to be trained .
But after that , you could be thinking of stuff like " purple green striped elephant " as the escape sequence to tell the computer to start listening in and doing stuff based on the thought patterns it recognizes ( which could include : take a picture of this , and associate it with this current thought pattern ) .
You then use a different unique thought pattern to get it to stop ( or press a manual button if stuff screws up : ) ) .
Then whenever you think of a thought pattern the picture or other computer memorized object ( audio , url , file ) will be recalled .
I have long considered this as the next step in augmenting humans .
Once you have that , people will have perfect photographic , " videographic " and " audiographic " memory , and be capable of " virtual telepathy " e.g. communicate with others just by thinking ( which is trivial obvious step once you combine the thought macro stuff with wireless comms ) .
The problem people have to realize is the MPAA , RIAA , DRM and restrictive copyright laws will get in the way of such augmentation .
It will either be crippled , prohibited or taxed severely .
They may not be happy with just a " penny for your thoughts " , especially if they have got the law to consider it as " their thoughts " and their property , and not yours .
p.s. I do n't see this development as " innovative " , since I consider it a rather obvious step ( along with the other rather obvious steps I 've mentioned in this post ) , but I 'm sure it 's probably patented etc ; ) .</tokentext>
<sentencetext>It's not going to replace keyboard tagging now.
But in the future more advanced versions might.
Then you could use "thought macros" to control wearable computers.
The measurements of thought patterns are likely to be specific to each person.
So devices that use thought input would have to be trained.
But after that, you could be thinking of stuff like "purple green striped elephant" as the escape sequence to tell the computer to start listening in and doing stuff based on the thought patterns it recognizes (which could include: take a picture of this, and associate it with this current thought pattern).
You then use a different unique thought pattern to get it to stop (or press a manual button if stuff screws up :) ).
Then whenever you think of a thought pattern the picture or other computer memorized object (audio, url, file) will be recalled.
I have long considered this as the next step in augmenting humans.
Once you have that, people will have perfect photographic, "videographic" and "audiographic" memory, and be capable of "virtual telepathy" e.g. communicate with others just by thinking (which is trivial obvious step once you combine the thought macro stuff with wireless comms).
The problem people have to realize is the MPAA, RIAA, DRM and restrictive copyright laws will get in the way of such augmentation.
It will either be crippled, prohibited or taxed severely.
They may not be happy with just a "penny for your thoughts", especially if they have got the law to consider it as "their thoughts" and their property, and not yours.
p.s. I don't see this development as "innovative", since I consider it a rather obvious step (along with the other rather obvious steps I've mentioned in this post), but I'm sure it's probably patented etc ;).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236558</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236596</id>
	<title>Cool!</title>
	<author>Rik Sweeney</author>
	<datestamp>1259248020000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>Just don't let Rorschach tag any images of ink blot tests.</p></htmltext>
<tokenext>Just do n't let Rorschach tag any images of ink blot tests .</tokentext>
<sentencetext>Just don't let Rorschach tag any images of ink blot tests.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237940</id>
	<title>Re:Fun and Easy to Use</title>
	<author>Tim C</author>
	<datestamp>1259258520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You used to work in a lab, so you ought to be familiar with how research works, and how often it produces actual products.</p><p>Forget the practicalities of people doing this in their homes; in principle, I think it's pretty damn cool.</p></htmltext>
<tokenext>You used to work in a lab , so you ought to be familiar with how research works , and how often it produces actual products .
Forget the practicalities of people doing this in their homes ; in principle , I think it 's pretty damn cool .</tokentext>
<sentencetext>You used to work in a lab, so you ought to be familiar with how research works, and how often it produces actual products.
Forget the practicalities of people doing this in their homes; in principle, I think it's pretty damn cool.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236848</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236562</id>
	<title>Have to be picky about your subjects</title>
	<author>RandomFactor</author>
	<datestamp>1259247780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>One can imagine a system that tags images by reading your mind as you surf the web.  If Google Image Search needed to tag an image, it could just pop it up in a window for 500 ms and read your thoughts to get the tag.</p></div></blockquote><p>Wouldn't work with teenage males for example....</p>
	</htmltext>
<tokenext>One can imagine a system that tags images by reading your mind as you surf the web .
If Google Image Search needed to tag an image , it could just pop it up in a window for 500 ms and read your thoughts to get the tag .
Would n't work with teenage males for example... .</tokentext>
<sentencetext>One can imagine a system that tags images by reading your mind as you surf the web.
If Google Image Search needed to tag an image, it could just pop it up in a window for 500 ms and read your thoughts to get the tag.
Wouldn't work with teenage males for example....
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30238474</id>
	<title>Deep brain?</title>
	<author>benow</author>
	<datestamp>1259262660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Any image that stimulates brain activity not close to the surface is unclassifiable?</htmltext>
<tokenext>Any image that stimulates brain activity not close to the surface is unclassifiable ?</tokentext>
<sentencetext>Any image that stimulates brain activity not close to the surface is unclassifiable?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30240442</id>
	<title>Integrate it with an intelligent vocabulary</title>
	<author>chetbox</author>
	<datestamp>1259237040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Maybe if this were integrated with an intelligent tag vocabulary, such as the one at <a href="http://annotator.imense.com/faq-annotator/#difference" title="imense.com" rel="nofollow">http://annotator.imense.com/faq-annotator/</a> [imense.com], the lives of those poor people manually tagging these images can be improved. Not to mention the potential increase in accuracy.</htmltext>
<tokenext>Maybe if this were integrated with an intelligent tag vocabulary , such as the one at http : //annotator.imense.com/faq-annotator/ [ imense.com ] , the lives of those poor people manually tagging these images can be improved .
Not to mention the potential increase in accuracy .</tokentext>
<sentencetext>Maybe if this were integrated with an intelligent tag vocabulary, such as the one at http://annotator.imense.com/faq-annotator/ [imense.com], the lives of those poor people manually tagging these images can be improved.
Not to mention the potential increase in accuracy.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237138</id>
	<title>Re:Fun and Easy to Use</title>
	<author>should_be_linear</author>
	<datestamp>1259252280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Still, it is improvement from Outlook.</htmltext>
<tokenext>Still , it is improvement from Outlook .</tokentext>
<sentencetext>Still, it is improvement from Outlook.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236848</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30239548</id>
	<title>Am I EEG Or Not</title>
	<author>lennier</author>
	<datestamp>1259229060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>So we'll now have automatic 'Like', 'Dislike' and 'Eeeeeeagh my visual cortex where is the brain soap' responses?</p></htmltext>
<tokenext>So we 'll now have automatic 'Like ' , 'Dislike ' and 'Eeeeeeagh my visual cortex where is the brain soap ' responses ?</tokentext>
<sentencetext>So we'll now have automatic 'Like', 'Dislike' and 'Eeeeeeagh my visual cortex where is the brain soap' responses?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236898</id>
	<title>Re:Looks Good on Paper...</title>
	<author>Anonymous</author>
	<datestamp>1259250420000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>So you frequent 4chans /b/, eh?</p></htmltext>
<tokenext>So you frequent 4chans /b/ , eh ?</tokentext>
<sentencetext>So you frequent 4chans /b/, eh?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236518</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236528</id>
	<title>I mentally tagged this as MenWhoStateAtGoats</title>
	<author>Anonymous</author>
	<datestamp>1259247480000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>It didn't take, though. What gives?</p></htmltext>
<tokenext>It did n't take , though .
What gives ?</tokentext>
<sentencetext>It didn't take, though.
What gives?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236692</id>
	<title>Offtopic (-1)</title>
	<author>Anonymous</author>
	<datestamp>1259248740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Why does Microsoft throw money at such research topics -- which seem never to reach their products -- when they also could use the money to improve their software? If they continue like this, they may be researching the evolution of flying chairs in a few years...</p></htmltext>
<tokenext>Why does Microsoft throw money at such research topics -- which seem never to reach their products -- when they also could use the money to improve their software ?
If they continue like this , they may be researching the evolution of flying chairs in a few years.. .</tokentext>
<sentencetext>Why does Microsoft throw money at such research topics -- which seem never to reach their products -- when they also could use the money to improve their software?
If they continue like this, they may be researching the evolution of flying chairs in a few years...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236558</id>
	<title>Interesting, but needs a lot of work</title>
	<author>Shrike82</author>
	<datestamp>1259247720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>From the summary:<p><div class="quote"><p>This new method allows an appropriate tag to be generated by an AI algorithm interpreting the EEG scan of a person's brain while they view an image.</p></div><p>That's true, as long as "appropriate" means it was either X or Y, as the system really only works on discriminating between things like "a face" and "not a face". It's an interesting piece of research, sure, but it sure as hell won't replace good old fashioned tagging using a keyboard.</p>
	</htmltext>
<tokenext>From the summary : This new method allows an appropriate tag to be generated by an AI algorithm interpreting the EEG scan of a person 's brain while they view an image .
That 's true , as long as " appropriate " means it was either X or Y , as the system really only works on discriminating between things like " a face " and " not a face " .
It 's an interesting piece of research , sure , but it sure as hell wo n't replace good old fashioned tagging using a keyboard .</tokentext>
<sentencetext>From the summary: This new method allows an appropriate tag to be generated by an AI algorithm interpreting the EEG scan of a person's brain while they view an image.
That's true, as long as "appropriate" means it was either X or Y, as the system really only works on discriminating between things like "a face" and "not a face".
It's an interesting piece of research, sure, but it sure as hell won't replace good old fashioned tagging using a keyboard.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237544</id>
	<title>3-class</title>
	<author>Metasquares</author>
	<datestamp>1259255340000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>Useful, but real-world tagging is much more specific than "person", "animal", or "inanimate". The number of classes required in the classification task is thus far greater and one would expect the accuracy to be proportionally lower.

OTOH, it could be a great preprocessing step for further manual analysis, or a step in a hierarchical clustering algorithm. Or maybe 3 classes suffice for certain specific situations.</htmltext>
<tokenext>Useful , but real-world tagging is much more specific than " person " , " animal " , or " inanimate " .
The number of classes required in the classification task is thus far greater and one would expect the accuracy to be proportionally lower .
OTOH , it could be a great preprocessing step for further manual analysis , or a step in a hierarchical clustering algorithm .
Or maybe 3 classes suffice for certain specific situations .</tokentext>
<sentencetext>Useful, but real-world tagging is much more specific than "person", "animal", or "inanimate".
The number of classes required in the classification task is thus far greater and one would expect the accuracy to be proportionally lower.
OTOH, it could be a great preprocessing step for further manual analysis, or a step in a hierarchical clustering algorithm.
Or maybe 3 classes suffice for certain specific situations.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236570</id>
	<title>Bwahahahahaa</title>
	<author>NoYob</author>
	<datestamp>1259247780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Yes, Microsoft <i>can</i> read your minds now! Next year, you'll have to plug this device in and wear it to run Windows. <p>MSFT: "Is his Windows and Office license legit? Let's read his mind and find out."</p><p>"Does he also run Linux?" <br>"Yep. Crank up up the juice and reprogram him."</p><p>The other thing I'd like mention, being only on my second cup this morning, those guys in the graphic when looked at quickly looked like they were wearing thigh high stockings.</p></htmltext>
<tokenext>Yes , Microsoft can read your minds now !
Next year , you 'll have to plug this device in and wear it to run Windows .
MSFT : " Is his Windows and Office license legit ?
Let 's read his mind and find out .
" " Does he also run Linux ?
" " Yep .
Crank up up the juice and reprogram him .
" The other thing I 'd like mention , being only on my second cup this morning , those guys in the graphic when looked at quickly looked like they were wearing thigh high stockings .</tokentext>
<sentencetext>Yes, Microsoft can read your minds now!
Next year, you'll have to plug this device in and wear it to run Windows.
MSFT: "Is his Windows and Office license legit?
Let's read his mind and find out.
""Does he also run Linux?
" "Yep.
Crank up up the juice and reprogram him.
"The other thing I'd like mention, being only on my second cup this morning, those guys in the graphic when looked at quickly looked like they were wearing thigh high stockings.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236654</id>
	<title>Connecting an EEG reader to the Internet...</title>
	<author>Hurricane78</author>
	<datestamp>1259248500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>...what could <em>possibly</em> go wrong? ^^</p></htmltext>
<tokenext>...what could possibly go wrong ?
^ ^</tokentext>
<sentencetext>...what could possibly go wrong?
^^</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237180</id>
	<title>no semantic information? Ahem *cough*</title>
	<author>Anonymous</author>
	<datestamp>1259252580000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><div class="quote"><p>unlabeled and have no <b>computer-readable</b> semantic information.</p></div><p>There, fixed that for you.</p><p>Seriously, the old saying "an image is worth 1,000 words" implies that images frequently have semantic information, at least in the sense that anything on paper can have semantic information.  It's just that computers can't parse it, catalog it, search on it, etc. Not well, anyways, not yet.  They are getting there though.</p>
	</htmltext>
<tokenext>unlabeled and have no computer-readable semantic information .
There , fixed that for you .
Seriously , the old saying " an image is worth 1,000 words " implies that images frequently have semantic information , at least in the sense that anything on paper can have semantic information .
It 's just that computers ca n't parse it , catalog it , search on it , etc .
Not well , anyways , not yet .
They are getting there though .</tokentext>
<sentencetext>unlabeled and have no computer-readable semantic information.
There, fixed that for you.
Seriously, the old saying "an image is worth 1,000 words" implies that images frequently have semantic information, at least in the sense that anything on paper can have semantic information.
It's just that computers can't parse it, catalog it, search on it, etc.
Not well, anyways, not yet.
They are getting there though.
	</sentencetext>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_26_0322227_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236782
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236528
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_26_0322227_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237138
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236848
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_26_0322227_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237880
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236848
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_26_0322227_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236740
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236570
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_26_0322227_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237940
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236848
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_26_0322227_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30238508
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236848
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_26_0322227_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237850
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236848
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_26_0322227_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237222
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236536
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_26_0322227_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236898
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236518
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_26_0322227_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30238182
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236848
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_26_0322227_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30239688
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236518
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_26_0322227_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236984
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236658
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_26_0322227_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237114
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236518
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_26_0322227_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237216
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236820
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236558
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_26_0322227.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236558
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236820
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237216
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_26_0322227.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236628
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_26_0322227.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236658
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236984
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_26_0322227.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236518
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236898
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30239688
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237114
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_26_0322227.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236536
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237222
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_26_0322227.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236654
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_26_0322227.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236570
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236740
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_26_0322227.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236848
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237880
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30238508
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237940
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237850
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30238182
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237138
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_26_0322227.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236528
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30236782
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_26_0322227.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237796
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_26_0322227.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_26_0322227.30237180
</commentlist>
</conversation>
