<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_11_27_1851243</id>
	<title>Building 3D Models On the Fly With a Webcam</title>
	<author>kdawson</author>
	<datestamp>1259349540000</datestamp>
	<htmltext>blee37 writes <i>"Here is an excellent video demonstration of a new program developed by <a href="http://mi.eng.cam.ac.uk/~qp202">Qi Pan</a>, a graduate student, and other researchers at the University of Cambridge. The 'ProFORMA' software <a href="http://scitedaily.com/2009/11/25/building-3d-models-on-the-fly-using-a-webcam/">constructs a 3D model of an object in real time</a> from (commodity) webcam video. The user can watch the program deduce more pieces of the 3D model as the object is moved and rotated. The resulting graphics are of high quality."</i></htmltext>
<tokentext>blee37 writes " Here is an excellent video demonstration of a new program developed by Qi Pan , a graduate student , and other researchers at the University of Cambridge .
The 'ProFORMA ' software constructs a 3D model of an object in real time from ( commodity ) webcam video .
The user can watch the program deduce more pieces of the 3D model as the object is moved and rotated .
The resulting graphics are of high quality .
"</tokentext>
<sentencetext>blee37 writes "Here is an excellent video demonstration of a new program developed by Qi Pan, a graduate student, and other researchers at the University of Cambridge.
The 'ProFORMA' software constructs a 3D model of an object in real time from (commodity) webcam video.
The user can watch the program deduce more pieces of the 3D model as the object is moved and rotated.
The resulting graphics are of high quality.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30254434</id>
	<title>Re:Hello 'Likeness Theft'</title>
	<author>Anonymous</author>
	<datestamp>1259422680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Well, I've been doing this for years. All you needed was a 3D tracking app, and then to build the mesh based on those computed points.</p><p>In fact, this is not the first incarnation of these kinds of apps. And don't call the models they make high quality. But it's certainly a quite good implementation.</p><p>The big problem with scanned 3D is that it's often too dense, has points in the wrong place, and is too heavy or too sparse. It lacks flow. This leads to an unusable 3D model.</p></htmltext>
<tokentext>Well , I 've been doing this for years .
All you needed was a 3D tracking app , and then to build the mesh based on those computed points .
In fact , this is not the first incarnation of these kinds of apps .
And do n't call the models they make high quality .
But it 's certainly a quite good implementation .
The big problem with scanned 3D is that it 's often too dense , has points in the wrong place , and is too heavy or too sparse .
It lacks flow .
This leads to an unusable 3D model .</tokentext>
<sentencetext>Well, I've been doing this for years.
All you needed was a 3D tracking app, and then to build the mesh based on those computed points.
In fact, this is not the first incarnation of these kinds of apps.
And don't call the models they make high quality.
But it's certainly a quite good implementation.
The big problem with scanned 3D is that it's often too dense, has points in the wrong place, and is too heavy or too sparse.
It lacks flow.
This leads to an unusable 3D model.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248320</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998</id>
	<title>The Death of Hollywood</title>
	<author>Anonymous</author>
	<datestamp>1259353740000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext>With open-source rendering already well established and continually improving, that only leaves the content areas under-developed.  This method will allow anyone with an object to digitize it.  This will enable people to take that content and then mix it into virtual environments.  Throw in some voice-synthesis software, some directing software, and a million monkeys hammering away at plots, and Hollywood as an institution is dead.  This is another piece; the others will fall into line as well.  It is ironic that in one of the Civilization games, discovering the Internet invalidates the Hollywood Wonder.</htmltext>
<tokentext>With open-source rendering already well established and continually improving , that only leaves the content areas under-developed .
This method will allow anyone with an object to digitize it .
This will enable people to take that content and then mix it into virtual environments .
Throw in some voice-synthesis software , some directing software , and a million monkeys hammering away at plots , and Hollywood as an institution is dead .
This is another piece ; the others will fall into line as well .
It is ironic that in one of the Civilization games , discovering the Internet invalidates the Hollywood Wonder .</tokentext>
<sentencetext>With open-source rendering already well established and continually improving, that only leaves the content areas under-developed.
This method will allow anyone with an object to digitize it.
This will enable people to take that content and then mix it into virtual environments.
Throw in some voice-synthesis software, some directing software, and a million monkeys hammering away at plots, and Hollywood as an institution is dead.
This is another piece; the others will fall into line as well.
It is ironic that in one of the Civilization games, discovering the Internet invalidates the Hollywood Wonder.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248664</id>
	<title>A New Video Toaster</title>
	<author>Anonymous</author>
	<datestamp>1259315040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Yeah and the Video Toaster "took the power out of the hands of the Networks and put it in the hands of the People"</p><p>Blah Blah Blah</p><p>Get off my lawn.</p></htmltext>
<tokentext>Yeah and the Video Toaster " took the power out of the hands of the Networks and put it in the hands of the People " Blah Blah Blah Get off my lawn .</tokentext>
<sentencetext>Yeah and the Video Toaster "took the power out of the hands of the Networks and put it in the hands of the People" Blah Blah Blah Get off my lawn.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248350</id>
	<title>Needs Chroma Keying</title>
	<author>Anonymous</author>
	<datestamp>1259313060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It may help the modeling process if chroma keying could be used to make the camera ignore supporting objects like a turntable or a hand that's holding the object.  Another improvement could be to automatically cut out excess vectors and triangles.  It wouldn't be too difficult (for someone who can make this type of software) to determine that the plane that makes up the side of the demo building is virtually flat and reduce the complexity to two triangles.</p><p>One of the key limiting factors to amateurs making mods for popular games is the expense and complexity of 3D modeling software.  Few good coders are also artists.  And few good artists are also coders.</p><p>Back in the day anyone could put Barney in DooM or Wolf3D.  Now it takes expensive software and a lot of time to shoot up your favorite character you love to hate.</p></htmltext>
<tokentext>It may help the modeling process if chroma keying could be used to make the camera ignore supporting objects like a turntable or a hand that 's holding the object .
Another improvement could be to automatically cut out excess vectors and triangles .
It would n't be too difficult ( for someone who can make this type of software ) to determine that the plane that makes up the side of the demo building is virtually flat and reduce the complexity to two triangles .
One of the key limiting factors to amateurs making mods for popular games is the expense and complexity of 3D modeling software .
Few good coders are also artists .
And few good artists are also coders .
Back in the day anyone could put Barney in DooM or Wolf3D .
Now it takes expensive software and a lot of time to shoot up your favorite character you love to hate .</tokentext>
<sentencetext>It may help the modeling process if chroma keying could be used to make the camera ignore supporting objects like a turntable or a hand that's holding the object.
Another improvement could be to automatically cut out excess vectors and triangles.
It wouldn't be too difficult (for someone who can make this type of software) to determine that the plane that makes up the side of the demo building is virtually flat and reduce the complexity to two triangles.
One of the key limiting factors to amateurs making mods for popular games is the expense and complexity of 3D modeling software.
Few good coders are also artists.
And few good artists are also coders.
Back in the day anyone could put Barney in DooM or Wolf3D.
Now it takes expensive software and a lot of time to shoot up your favorite character you love to hate.</sentencetext>
</comment>
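<!-- Editor's sketch: the chroma-key masking idea from the comment above, in a
minimal, illustrative form. It assumes OpenCV and NumPy; the webcam index and
the HSV thresholds for a green turntable or glove are made-up placeholders,
not anything from ProFORMA.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)                  # commodity webcam
lower = np.array([40, 60, 60])             # illustrative lower HSV bound for "green"
upper = np.array([80, 255, 255])           # illustrative upper HSV bound

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    key = cv2.inRange(hsv, lower, upper)   # 255 where the support color appears
    mask = cv2.bitwise_not(key)            # keep everything that is NOT keyed out
    fg = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow("object only", fg)          # this frame would feed the reconstructor
    if cv2.waitKey(1) == 27:               # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
-->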
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30253462</id>
	<title>Re:The Death of Hollywood</title>
	<author>mldi</author>
	<datestamp>1259406360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>I take it you aren't used to using Poser or Blender, or any other related 3-d software and thus don't know the joy of: "You STUPID PROGRAM! I just want her to walk down the stairs! Why are her arms doing that! NO! NO! NOO!!!! Stop floating down the stairs and walk! Why is your hair clipping through the wall, why is your hair even moving that way! STOP IT!"</p><p>Hollywood's death knell might be sounding. But it's got a few more good decades left in it before we need to mourn for it.</p></div><p>
Hilarious you bring that up. Did you ever look at the model of <a href="http://www.bigbuckbunny.org/" title="bigbuckbunny.org" rel="nofollow">Big Buck Bunny</a> [bigbuckbunny.org]? It's like his face is sucked in down his throat, just so that when it renders, it looks like the way they want it to. It's horribly fucked up for anything more complex than snowmen or giant walking stick men.</p>
	</htmltext>
<tokentext>I take it you are n't used to using Poser or Blender , or any other related 3-d software and thus do n't know the joy of : " You STUPID PROGRAM !
I just want her to walk down the stairs !
Why are her arms doing that !
NO ! NO !
NOO ! ! ! ! Stop floating down the stairs and walk !
Why is your hair clipping through the wall , why is your hair even moving that way !
STOP IT !
" Hollywood 's death knell might be sounding .
But it 's got a few more good decades left in it before we need to mourn for it .
Hilarious you bring that up .
Did you ever look at the model of Big Buck Bunny [ bigbuckbunny.org ] ?
It 's like his face is sucked in down his throat , just so that when it renders , it looks like the way they want it to .
It 's horribly fucked up for anything more complex than snowmen or giant walking stick men .</tokentext>
<sentencetext>I take it you aren't used to using Poser or Blender, or any other related 3-d software and thus don't know the joy of: "You STUPID PROGRAM!
I just want her to walk down the stairs!
Why are her arms doing that!
NO! NO!
NOO!!!! Stop floating down the stairs and walk!
Why is your hair clipping through the wall, why is your hair even moving that way!
STOP IT!
"Hollywood's death knell might be sounding.
But it's got a few more good decades left in it before we need to mourn for it.
Hilarious you bring that up.
Did you ever look at the model of Big Buck Bunny [bigbuckbunny.org]?
It's like his face is sucked in down his throat, just so that when it renders, it looks like the way they want it to.
It's horribly fucked up for anything more complex than snowmen or giant walking stick men.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248404</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248976</id>
	<title>Re:Needs Chroma Keying</title>
	<author>Anonymous</author>
	<datestamp>1259316660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I think one of the cool things here is that chroma keying wasn't needed.</p></htmltext>
<tokentext>I think one of the cool things here is that chroma keying was n't needed .</tokentext>
<sentencetext>I think one of the cool things here is that chroma keying wasn't needed.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248350</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248320</id>
	<title>Hello 'Likeness Theft'</title>
	<author>kbob88</author>
	<datestamp>1259312940000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>I can just see it now -- anyone who can get a bit of video of you can create a 3-D model of your face and body, and then do anything with the likeness. When rendering gets really good, this could be a bit embarrassing. Instead of 2D retouched photos of celebrities and politicians, we'll be seeing hacked up 'animated' (but realistic) video of them doing all sorts of wild stuff. Well, it might be a boon to the porn industry, at least in the short-term before the rendering software becomes available to consumers.</p></htmltext>
<tokentext>I can just see it now -- anyone who can get a bit of video of you can create a 3-D model of your face and body , and then do anything with the likeness .
When rendering gets really good , this could be a bit embarrassing .
Instead of 2D retouched photos of celebrities and politicians , we 'll be seeing hacked up 'animated ' ( but realistic ) video of them doing all sorts of wild stuff .
Well , it might be a boon to the porn industry , at least in the short-term before the rendering software becomes available to consumers .</tokentext>
<sentencetext>I can just see it now -- anyone who can get a bit of video of you can create a 3-D model of your face and body, and then do anything with the likeness.
When rendering gets really good, this could be a bit embarrassing.
Instead of 2D retouched photos of celebrities and politicians, we'll be seeing hacked up 'animated' (but realistic) video of them doing all sorts of wild stuff.
Well, it might be a boon to the porn industry, at least in the short-term before the rendering software becomes available to consumers.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30251118</id>
	<title>Re:Academic projects versus commercial application</title>
	<author>Telvin_3d</author>
	<datestamp>1259328240000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>The major reason that these types of programs don't get expanded into commercial products or bought and integrated into existing products is that they are cute tech demos but not particularly interesting in the real world.</p><p>Almost without exception anything simple enough for these types of reconstruction programs to handle is too simple to bother with. The paper church in the demo video for instance. The final wire-frame product is, sadly, crap. Neat and interesting crap but still crap. There are at least 3 times the polys that the form needs and almost all of the significant edges are in the wrong place. In the time it would take to clean up the data into something worth using I could build a better model from scratch including textures.</p><p>There are perhaps some very niche uses for this in terms of augmented reality. It could be integrated into a game or chat program to give a more realistic version of those make-an-avatar-from-your-webcam gimmicks that seem to gain attention every once in a while. If this guy has developed some very good algorithms he might get the interest of some of the match-moving software companies like Syntheyes.</p><p>But the reason this kind of thing never shows up in professional 3D packages is that if you are good enough to be using the software professionally you are good enough not to need these kinds of crutches. It's the 3D equivalent of Dreamweaver's auto-generated spaghetti code.</p></htmltext>
<tokentext>The major reason that these types of programs do n't get expanded into commercial products or bought and integrated into existing products is that they are cute tech demos but not particularly interesting in the real world .
Almost without exception anything simple enough for these types of reconstruction programs to handle is too simple to bother with .
The paper church in the demo video for instance .
The final wire-frame product is , sadly , crap .
Neat and interesting crap but still crap .
There are at least 3 times the polys that the form needs and almost all of the significant edges are in the wrong place .
In the time it would take to clean up the data into something worth using I could build a better model from scratch including textures .
There are perhaps some very niche uses for this in terms of augmented reality .
It could be integrated into a game or chat program to give a more realistic version of those make-an-avatar-from-your-webcam gimmicks that seem to gain attention every once in a while .
If this guy has developed some very good algorithms he might get the interest of some of the match-moving software companies like Syntheyes .
But the reason this kind of thing never shows up in professional 3D packages is that if you are good enough to be using the software professionally you are good enough not to need these kinds of crutches .
It 's the 3D equivalent of Dreamweaver 's auto-generated spaghetti code .</tokentext>
<sentencetext>The major reason that these types of programs don't get expanded into commercial products or bought and integrated into existing products is that they are cute tech demos but not particularly interesting in the real world.
Almost without exception anything simple enough for these types of reconstruction programs to handle is too simple to bother with.
The paper church in the demo video for instance.
The final wire-frame product is, sadly, crap.
Neat and interesting crap but still crap.
There are at least 3 times the polys that the form needs and almost all of the significant edges are in the wrong place.
In the time it would take to clean up the data into something worth using I could build a better model from scratch including textures.
There are perhaps some very niche uses for this in terms of augmented reality.
It could be integrated into a game or chat program to give a more realistic version of those make-an-avatar-from-your-webcam gimmicks that seem to gain attention every once in a while.
If this guy has developed some very good algorithms he might get the interest of some of the match-moving software companies like Syntheyes.
But the reason this kind of thing never shows up in professional 3D packages is that if you are good enough to be using the software professionally you are good enough not to need these kinds of crutches.
It's the 3D equivalent of Dreamweaver's auto-generated spaghetti code.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249976</parent>
</comment>
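<!-- Editor's sketch: the "3 times the polys the form needs" cleanup the comment
above describes is a standard decimation pass. A minimal sketch using Open3D's
quadric decimation; the file names and the one-third target are placeholders,
and this is generic post-processing, not anything ProFORMA itself does.

import open3d as o3d

mesh = o3d.io.read_triangle_mesh("scanned_church.ply")   # hypothetical scan
print("before:", len(mesh.triangles), "triangles")

# Collapse edges until roughly a third of the triangles remain.
target = max(len(mesh.triangles) // 3, 4)
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
simplified.compute_vertex_normals()

print("after:", len(simplified.triangles), "triangles")
o3d.io.write_triangle_mesh("scanned_church_simplified.ply", simplified)
-->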
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30253276</id>
	<title>Motion Capture - Yahoo group</title>
	<author>sonamchauhan</author>
	<datestamp>1259401740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>People interested in this area may find the 'Motion Capture' Yahoo group useful.</p><p>Its website is located here:<br><a href="http://movies.groups.yahoo.com/group/motioncapture/" title="yahoo.com">http://movies.groups.yahoo.com/group/motioncapture/</a> [yahoo.com]</p><p>A recent interesting message from the group (edited to evade ./ Junk filter):</p><p>---------- Forwarded message ----------<br>From: Brad Friedman <br>Date: Sun, Nov 15, 2009 at 9:35 AM<br>Subject: [motioncapture] releasing some optitrack open source software and code<br>To: mocap list </p><p>Hey all.</p><p>Been a while. I've been rather busy with other things.</p><p>I'm releasing some OptiTrack related open source software and code.</p><p>Two simple modules have been released. One is LGPL and one is BSD.</p><p><a href="http://www.fie.us/kinearx" title="www.fie.us">http://www.fie.us/kinearx</a> [www.fie.us]</p><p>It's all alpha-level stuff. But it's working well enough for me to move on to other things. I feel it's worth open sourcing something that runs, and does useful things. Even if it's not feature-complete and production tested.</p><p>It's really for developers more than end users for now. But this list is about developing tools. So there you go.</p><p>My theory is simple: This particular part of working with OT cameras is kinda generic and somewhat dull. I'd rather have an open source backend that we can all maintain together, than have to maintain it completely by myself. The Windows app is LGPL. The example client is BSD. Therefore, it should be good for open source and commercial developers who, like me, also want to collaborate on the dull backend and spend more time on other aspects of mocap systems. The projects are separated by a nice simple binary data stream layer to make sure their licenses don't conflict.</p><p>Two main features of interest:</p><p>1. Cameras operating on different computers can be synchronized by looping the sync cable through. The existing Arena and PointCloud tools from NP don't do this on their backend.</p><p>2. Development is jailbroken out of the PC environment by the binary protocol. The example script is written in Python and runs on OS X and Linux, for example.</p><p>Further work to be done:</p><p>1. Better support of OT cameras other than the V100r1. That's the only camera I have, so that's what I know is supported. C120 and V100r2 are something I can't really confirm function of. But I'd like to.</p><p>2. Occasional sending of a GMT timestamp from the 2d servers, interleaved with the frames, for sanity checking purposes and helping with situations where the sync cable may not be working fully.</p><p>3. Switch between the mass marker mode, and COM object mode. This should make the grayscale and masking features of the camera work again (I think being in mass object mode disables them).</p><p>Feel free to e-mail me with questions or queries.</p><p>Brad Friedman<br>VFX - Consultant - Mocap<br><a href="http://www.fie.us/" title="www.fie.us">http://www.fie.us/</a> [www.fie.us]</p></htmltext>
<tokentext>People interested in this area may find the 'Motion Capture ' Yahoo group useful .
Its website is located here : http://movies.groups.yahoo.com/group/motioncapture/ [ yahoo.com ]
A recent interesting message from the group ( edited to evade ./ Junk filter ) :
---------- Forwarded message ----------
From : Brad Friedman
Date : Sun , Nov 15 , 2009 at 9:35 AM
Subject : [ motioncapture ] releasing some optitrack open source software and code
To : mocap list
Hey all .
Been a while .
I 've been rather busy with other things .
I 'm releasing some OptiTrack related open source software and code .
Two simple modules have been released .
One is LGPL and one is BSD .
http://www.fie.us/kinearx [ www.fie.us ]
It 's all alpha-level stuff .
But it 's working well enough for me to move on to other things .
I feel it 's worth open sourcing something that runs , and does useful things .
Even if it 's not feature-complete and production tested .
It 's really for developers more than end users for now .
But this list is about developing tools .
So there you go .
My theory is simple : This particular part of working with OT cameras is kinda generic and somewhat dull .
I 'd rather have an open source backend that we can all maintain together , than have to maintain it completely by myself .
The Windows app is LGPL .
The example client is BSD .
Therefore , it should be good for open source and commercial developers who , like me , also want to collaborate on the dull backend and spend more time on other aspects of mocap systems .
The projects are separated by a nice simple binary data stream layer to make sure their licenses do n't conflict .
Two main features of interest :
1 . Cameras operating on different computers can be synchronized by looping the sync cable through .
The existing Arena and PointCloud tools from NP do n't do this on their backend .
2 . Development is jailbroken out of the PC environment by the binary protocol .
The example script is written in Python and runs on OS X and Linux , for example .
Further work to be done :
1 . Better support of OT cameras other than the V100r1 .
That 's the only camera I have , so that 's what I know is supported .
C120 and V100r2 are something I ca n't really confirm function of .
But I 'd like to .
2 . Occasional sending of a GMT timestamp from the 2d servers , interleaved with the frames , for sanity checking purposes and helping with situations where the sync cable may not be working fully .
3 . Switch between the mass marker mode , and COM object mode .
This should make the grayscale and masking features of the camera work again ( I think being in mass object mode disables them ) .
Feel free to e-mail me with questions or queries .
Brad Friedman
VFX - Consultant - Mocap
http://www.fie.us/ [ www.fie.us ]</tokentext>
<sentencetext>People interested in this area may find the 'Motion Capture' Yahoo group useful.
Its website is located here: http://movies.groups.yahoo.com/group/motioncapture/ [yahoo.com]
A recent interesting message from the group (edited to evade ./ Junk filter):
---------- Forwarded message ----------
From: Brad Friedman
Date: Sun, Nov 15, 2009 at 9:35 AM
Subject: [motioncapture] releasing some optitrack open source software and code
To: mocap list
Hey all.
Been a while.
I've been rather busy with other things.
I'm releasing some OptiTrack related open source software and code.
Two simple modules have been released.
One is LGPL and one is BSD.
http://www.fie.us/kinearx [www.fie.us]
It's all alpha-level stuff.
But it's working well enough for me to move on to other things.
I feel it's worth open sourcing something that runs, and does useful things.
Even if it's not feature-complete and production tested.
It's really for developers more than end users for now.
But this list is about developing tools.
So there you go.
My theory is simple: This particular part of working with OT cameras is kinda generic and somewhat dull.
I'd rather have an open source backend that we can all maintain together, than have to maintain it completely by myself.
The Windows app is LGPL.
The example client is BSD.
Therefore, it should be good for open source and commercial developers who, like me, also want to collaborate on the dull backend and spend more time on other aspects of mocap systems.
The projects are separated by a nice simple binary data stream layer to make sure their licenses don't conflict.
Two main features of interest:
1. Cameras operating on different computers can be synchronized by looping the sync cable through.
The existing Arena and PointCloud tools from NP don't do this on their backend.
2. Development is jailbroken out of the PC environment by the binary protocol.
The example script is written in Python and runs on OS X and Linux, for example.
Further work to be done:
1. Better support of OT cameras other than the V100r1.
That's the only camera I have, so that's what I know is supported.
C120 and V100r2 are something I can't really confirm function of.
But I'd like to.
2. Occasional sending of a GMT timestamp from the 2d servers, interleaved with the frames, for sanity checking purposes and helping with situations where the sync cable may not be working fully.
3. Switch between the mass marker mode, and COM object mode.
This should make the grayscale and masking features of the camera work again (I think being in mass object mode disables them).
Feel free to e-mail me with questions or queries.
Brad Friedman
VFX - Consultant - Mocap
http://www.fie.us/ [www.fie.us]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30254168</id>
	<title>Re:Hello 'Likeness Theft'</title>
	<author>sw155kn1f3</author>
	<datestamp>1259419380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If you're old enough, you may remember "The Running Man", starring the governator, which did exactly that to show his character as having lost the battle.</p></htmltext>
<tokentext>If you 're old enough , you may remember " The Running Man " , starring the governator , which did exactly that to show his character as having lost the battle .</tokentext>
<sentencetext>If you're old enough, you may remember "The Running Man", starring the governator, which did exactly that to show his character as having lost the battle.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248320</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30255100</id>
	<title>Re:3D vision for robots</title>
	<author>Anonymous</author>
	<datestamp>1259429880000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>129600 (360 by 360) degrees of space,</p></div><p>You probably mean <a href="http://en.wikipedia.org/wiki/Steradian" title="wikipedia.org" rel="nofollow">4 pi or about 12.56637 steradians</a> [wikipedia.org]</p>
	</htmltext>
<tokentext>129600 ( 360 by 360 ) degrees of space , You probably mean 4 pi or about 12.56637 steradians [ wikipedia.org ]</tokentext>
<sentencetext>129600 (360 by 360) degrees of space, You probably mean 4 pi or about 12.56637 steradians [wikipedia.org]
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30250014</parent>
</comment>
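<!-- Editor's note: the correction checks out. The solid angle of the full
sphere, worked in spherical coordinates:

\Omega = \int_{0}^{2\pi} \int_{0}^{\pi} \sin\theta \, d\theta \, d\varphi
       = 2\pi \left[ -\cos\theta \right]_{0}^{\pi}
       = 4\pi \approx 12.56637 \ \mathrm{sr}

so "degrees squared" is not the right unit for an amount of space; 4 pi
steradians covers every direction.
-->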
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248674</id>
	<title>Re:The Death of Hollywood</title>
	<author>RAMMS+EIN</author>
	<datestamp>1259315100000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>In theory, we can make good computer games, too.</p><p>But how many open-source games can you name that have great graphics? And how many closed-source games with great graphics are there?</p><p>I don't think Hollywood is dying just yet.</p></htmltext>
<tokentext>In theory , we can make good computer games , too .
But how many open-source games can you name that have great graphics ?
And how many closed-source games with great graphics are there ?
I do n't think Hollywood is dying just yet .</tokentext>
<sentencetext>In theory, we can make good computer games, too.
But how many open-source games can you name that have great graphics?
And how many closed-source games with great graphics are there?
I don't think Hollywood is dying just yet.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249660</id>
	<title>Re:3D vision for robots</title>
	<author>Rabbitbunny</author>
	<datestamp>1259320080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That's actually how my idea for a WoW bot worked...</p><p>I see a use for this technology.</p></htmltext>
<tokentext>That 's actually how my idea for a WoW bot worked ...
I see a use for this technology .</tokentext>
<sentencetext>That's actually how my idea for a WoW bot worked...
I see a use for this technology.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249188</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248404</id>
	<title>Re:The Death of Hollywood</title>
	<author>Chyeld</author>
	<datestamp>1259313420000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>I take it you aren't used to using Poser or Blender, or any other related 3-d software and thus don't know the joy of: "You STUPID PROGRAM! I just want her to walk down the stairs! Why are her arms doing that! NO! NO! NOO!!!! Stop floating down the stairs and walk! Why is your hair clipping through the wall, why is your hair even moving that way! STOP IT!"</p><p>Hollywood's death knell might be sounding. But it's got a few more good decades left in it before we need to mourn for it.</p></htmltext>
<tokentext>I take it you are n't used to using Poser or Blender , or any other related 3-d software and thus do n't know the joy of : " You STUPID PROGRAM !
I just want her to walk down the stairs !
Why are her arms doing that !
NO ! NO !
NOO ! ! ! ! Stop floating down the stairs and walk !
Why is your hair clipping through the wall , why is your hair even moving that way !
STOP IT !
" Hollywood 's death knell might be sounding .
But it 's got a few more good decades left in it before we need to mourn for it .</tokentext>
<sentencetext>I take it you aren't used to using Poser or Blender, or any other related 3-d software and thus don't know the joy of: "You STUPID PROGRAM!
I just want her to walk down the stairs!
Why are her arms doing that!
NO! NO!
NOO!!!! Stop floating down the stairs and walk!
Why is your hair clipping through the wall, why is your hair even moving that way!
STOP IT!
"Hollywood's death knell might be sounding.
But it's got a few more good decades left in it before we need to mourn for it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248914</id>
	<title>Imperial College have something similar</title>
	<author>Anonymous</author>
	<datestamp>1259316360000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>There's another project in a similar field that researchers at Imperial College have been working on.</p><p>Take a look at http://www.doc.ic.ac.uk/~ajd/ and you'll see what I'm talking about.<br>This technique seems to work in a similar manner, but is based on the motion of the camera... so it's less limited.</p><p>It's not directly designed for the same purpose, but a couple of years back I spoke to the researchers when we were thinking of making use of the technology for modelling London Underground stations, and with a little development, they were pretty sure it could do the same thing as was being demonstrated.</p></htmltext>
<tokentext>There 's another project in a similar field that researchers at Imperial College have been working on .
Take a look at http://www.doc.ic.ac.uk/~ajd/ and you 'll see what I 'm talking about .
This technique seems to work in a similar manner , but is based on the motion of the camera ... so it 's less limited .
It 's not directly designed for the same purpose , but a couple of years back I spoke to the researchers when we were thinking of making use of the technology for modelling London Underground stations , and with a little development , they were pretty sure it could do the same thing as was being demonstrated .</tokentext>
<sentencetext>There's another project in a similar field that researchers at Imperial College have been working on.
Take a look at http://www.doc.ic.ac.uk/~ajd/ and you'll see what I'm talking about.
This technique seems to work in a similar manner, but is based on the motion of the camera... so it's less limited.
It's not directly designed for the same purpose, but a couple of years back I spoke to the researchers when we were thinking of making use of the technology for modelling London Underground stations, and with a little development, they were pretty sure it could do the same thing as was being demonstrated.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248418</id>
	<title>Re:The Death of Hollywood</title>
	<author>Requiem18th</author>
	<datestamp>1259313600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Ah but in the near future you will need to get a copyright license to make a picture with a model taken from a real object. Soon you won't be able to make a movie without getting a "RAND" license for every object that appears in your movies.</p></htmltext>
<tokentext>Ah but in the near future you will need to get a copyright license to make a picture with a model taken from a real object .
Soon you wo n't be able to make a movie without getting a " RAND " license for every object that appears in your movies .</tokentext>
<sentencetext>Ah but in the near future you will need to get a copyright license to make a picture with a model taken from a real object.
Soon you won't be able to make a movie without getting a "RAND" license for every object that appears in your movies.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249976</id>
	<title>Academic projects versus commercial applications</title>
	<author>Frans Faase</author>
	<datestamp>1259321700000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext>There seems to be a huge gap between these kinds of academic projects and the commercially available programs. I have come across several commercial applications that can do these kinds of things, but these applications cost at least a thousand dollars or even more. And then there are all these academic projects (going on for at least two decades), which present nice videos and papers, and sometimes release some software. But when you look at the software, you discover that you first have to download nine other packages and compile the whole thing, and what you get is some kind of script you have to run, with all sorts of command line options. But so far, I have never found an application with a solid interface on the level of the GIMP or Blender, for that matter. I find this rather strange. I am almost getting the impression that some of the results are sold to the developers of the commercial packages.</htmltext>
<tokentext>There seems to be a huge gap between these kinds of academic projects and the commercially available programs .
I have come across several commercial applications that can do these kinds of things , but these applications cost at least a thousand dollars or even more .
And then there are all these academic projects ( going on for at least two decades ) , which present nice videos and papers , and sometimes release some software .
But when you look at the software , you discover that you first have to download nine other packages and compile the whole thing , and what you get is some kind of script you have to run , with all sorts of command line options .
But so far , I have never found an application with a solid interface on the level of the GIMP or Blender , for that matter .
I find this rather strange .
I am almost getting the impression that some of the results are sold to the developers of the commercial packages .</tokentext>
<sentencetext>There seems to be a huge gap between these kinds of academic projects and the commercially available programs.
I have come across several commercial applications that can do these kinds of things, but these applications cost at least a thousand dollars or even more.
And then there are all these academic projects (going on for at least two decades), which present nice videos and papers, and sometimes release some software.
But when you look at the software, you discover that you first have to download nine other packages and compile the whole thing, and what you get is some kind of script you have to run, with all sorts of command line options.
But so far, I have never found an application with a solid interface on the level of the GIMP or Blender, for that matter.
I find this rather strange.
I am almost getting the impression that some of the results are sold to the developers of the commercial packages.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249788</id>
	<title>Re:The Death of Hollywood</title>
	<author>Anonymous</author>
	<datestamp>1259320860000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Troll?!  Looks like Jack Valenti got some mod points today.</p></htmltext>
<tokentext>Troll ? !
Looks like Jack Valenti got some mod points today .</tokentext>
<sentencetext>Troll?!
Looks like Jack Valenti got some mod points today.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248098</id>
	<title>Re:The Death of Hollywood</title>
	<author>Anonymous</author>
	<datestamp>1259354400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Have you seen an action movie lately?  You need way less than a million monkeys.</p></htmltext>
<tokentext>Have you seen an action movie lately ?
You need way less than a million monkeys .</tokentext>
<sentencetext>Have you seen an action movie lately?
You need way less than a million monkeys.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30252992</id>
	<title>Simultaneous localization and mapping</title>
	<author>Anonymous</author>
	<datestamp>1259438880000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>
That's called "simultaneous location and mapping", and in the last five years, good algorithms have been developed and quite a few systems are more or less working.  Search for "Visual SLAM".
</p><p>
The Samsung Hauzen vacuum cleaner uses Visual SLAM.  There's a <a href="http://www.youtube.com/watch?v=bq5HZzGF3vQ" title="youtube.com">video.</a> [youtube.com]  This is way ahead of the blundering Roomba.</p></htmltext>
<tokentext>That 's called " simultaneous localization and mapping " , and in the last five years , good algorithms have been developed and quite a few systems are more or less working .
Search for " Visual SLAM " .
The Samsung Hauzen vacuum cleaner uses Visual SLAM .
There 's a video [ youtube.com ] .
This is way ahead of the blundering Roomba .</tokentext>
<sentencetext>
That's called "simultaneous location and mapping", and in the last five years, good algorithms have been developed and quite a few systems are more or less working.
Search for "Visual SLAM".
The Samsung Hauzen vacuum cleaner uses Visual SLAM.
There's a video [youtube.com].
This is way ahead of the blundering Roomba.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249188</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249282</id>
	<title>Re:Any open source software?</title>
	<author>Anonymous</author>
	<datestamp>1259318400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Bundler: http://phototour.cs.washington.edu/bundler/<br>It's GPL and is quite amazing when you get it to work. Use it in combination with SIFT and PMVS2 - although you can use other feature points and dense matchers, they will require much extra work.</p><p>If the guy doesn't release his source code, then don't really bother rewriting it; the whole field of 3D is more or less exploding right now, and there is more than enough GPL'ed code out there. Deciphering a paper and trying to do it yourself takes a lot of time and usually doesn't work too well.</p></htmltext>
<tokentext>Bundler : http://phototour.cs.washington.edu/bundler/
It 's GPL and is quite amazing when you get it to work .
Use it in combination with SIFT and PMVS2 - although you can use other feature points and dense matchers , they will require much extra work .
If the guy does n't release his source code , then do n't really bother rewriting it ; the whole field of 3D is more or less exploding right now , and there is more than enough GPL'ed code out there .
Deciphering a paper and trying to do it yourself takes a lot of time and usually does n't work too well .</tokentext>
<sentencetext>Bundler: http://phototour.cs.washington.edu/bundler/
It's GPL and is quite amazing when you get it to work.
Use it in combination with SIFT and PMVS2 - although you can use other feature points and dense matchers, they will require much extra work.
If the guy doesn't release his source code, then don't really bother rewriting it; the whole field of 3D is more or less exploding right now, and there is more than enough GPL'ed code out there.
Deciphering a paper and trying to do it yourself takes a lot of time and usually doesn't work too well.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248094</parent>
</comment>
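<!-- Editor's sketch: the feature-matching stage the pipeline above starts
from, using OpenCV's SIFT as a stand-in (Bundler itself consumes keypoint
files produced by Lowe's SIFT binary). Image paths and the 0.75 ratio are
illustrative placeholders.

import cv2

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test: keep a match only when it is clearly better than
# the second-best candidate.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(len(good), "putative correspondences to hand to bundle adjustment")
-->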
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248028</id>
	<title>Amazing new mathematics research</title>
	<author>For a Free Internet</author>
	<datestamp>1259353860000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>We are beginning to see the applied math fruits of the amazing advances in lambda-matrix algebra that have taken place in the last decade. Most people outside of the academy aren't aware but we are living through another golden age in Math. First, the proof of Krebik's theorem in 1992. Then, building on that, the multi-computational vectorization of Gauss entities (this was done on a Beowulf cluster running Slackware!) And finally in 1995, Chen cracked the higher-order Piezo series conundrum (nicknamed the "Pac-Man Problem"). Soon everything will be three-dimensional, thanks to these stunning innovations.</p></htmltext>
<tokentext>We are beginning to see the applied math fruits of the amazing advances in lambda-matrix algebra that have taken place in the last decade .
Most people outside of the academy are n't aware but we are living through another golden age in Math .
First , the proof of Krebik 's theorem in 1992 .
Then , building on that , the multi-computational vectorization of Gauss entities ( this was done on a Beowulf cluster running Slackware !
) And finally in 1995 , Chen cracked the higher-order Piezo series conundrum ( nicknamed the " Pac-Man Problem " ) . Soon everything will be three-dimensional , thanks to these stunning innovations .</tokentext>
<sentencetext>We are beginning to see the applied math fruits of the amazing advances in lambda-matrix algebra that have taken place in the last decade.
Most people outside of the academy aren't aware but we are living through another golden age in Math.
First, the proof of Krebik's theorem in 1992.
Then, building on that, the multi-computational vectorization of Gauss entities (this was done on a Beowulf cluster running Slackware!
) And finally in 1995, Chen cracked the higher-order Piezo series conundrum (nicknamed the "Pac-Man Problem"). Soon everything will be three-dimensional, thanks to these stunning innovations.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249188</id>
	<title>3D vision for robots</title>
	<author>cptnapalm</author>
	<datestamp>1259317860000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext><p>I was thinking about robots one day and I was wondering why those who work on computer vision didn't do something like this.  Instead of trying to get the machine to understand the analog world, why wouldn't it be better for the machine to have an internal representation of the world by making a 3d map?  Quake 3 CoffeeShop, if you will.</p><p>The idea I had was that the vision system creates a 3d map with entities, mapped from the vision system as well, inside.  The AI works within the 3d representation of the world.  If the AI wants to move from A to B, it signals the body controlling subsystem to start walking.  When the 3d representation, being informed by the vision system, tells the AI that it is at point B, then the AI signals to stop walking.</p><p>Hardware constraints notwithstanding, is this model any good?</p><p>I'm just a lowly, early middle aged novice C programmer who has never actually done anything with robotics, so if what I said made no sense or is obviously idiotic, I do understand that my ideas are comin' outta my ass.</p></htmltext>
<tokentext>I was thinking about robots one day and I was wondering why those who work on computer vision did n't do something like this .
Instead of trying to get the machine to understand the analog world , why would n't it be better for the machine to have an internal representation of the world by making a 3d map ?
Quake 3 CoffeeShop , if you will .
The idea I had was that the vision system creates a 3d map with entities , mapped from the vision system as well , inside .
The AI works within the 3d representation of the world .
If the AI wants to move from A to B , it signals the body controlling subsystem to start walking .
When the 3d representation , being informed by the vision system , tells the AI that it is at point B , then the AI signals to stop walking .
Hardware constraints notwithstanding , is this model any good ?
I 'm just a lowly , early middle aged novice C programmer who has never actually done anything with robotics , so if what I said made no sense or is obviously idiotic , I do understand that my ideas are comin ' outta my ass .</tokentext>
<sentencetext>I was thinking about robots one day and I was wondering why those who work on computer vision didn't do something like this.
Instead of trying to get the machine to understand the analog world, why wouldn't it be better for the machine to have an internal representation of the world by making a 3d map?
Quake 3 CoffeeShop, if you will.
The idea I had was that the vision system creates a 3d map with entities, mapped from the vision system as well, inside.
The AI works within the 3d representation of the world.
If the AI wants to move from A to B, it signals the body controlling subsystem to start walking.
When the 3d representation, being informed by the vision system, tells the AI that it is at point B, then the AI signals to stop walking.
Hardware constraints notwithstanding, is this model any good?
I'm just a lowly, early middle aged novice C programmer who has never actually done anything with robotics, so if what I said made no sense or is obviously idiotic, I do understand that my ideas are comin' outta my ass.</sentencetext>
</comment>
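<!-- Editor's sketch: a toy version of the architecture the comment above
proposes, where vision only updates an internal 3D map and the AI plans and
acts purely against that map. Every class and method here is a hypothetical
placeholder, not a real robotics API.

import math

class WorldModel:
    """Internal 3D representation, kept current by the vision system."""
    def __init__(self):
        self.robot_pos = (0.0, 0.0, 0.0)

    def update_from_vision(self, vision_frame):
        # A real system would run SLAM/reconstruction here; we just take
        # whatever pose the (hypothetical) vision frame reports.
        self.robot_pos = vision_frame.estimated_pose

def go_to(model, vision, motors, goal, tol=0.1):
    """Walk toward `goal` until the internal map says we have arrived."""
    motors.start_walking_toward(goal)
    while math.dist(model.robot_pos, goal) > tol:
        model.update_from_vision(vision.next_frame())
    motors.stop()
-->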
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249806</id>
	<title>Re:3D vision for robots</title>
	<author>Anonymous</author>
	<datestamp>1259320920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The problem is noise. This is the single most annoying issue in this type of computer vision problem. I've written an entire thesis on reducing noise in stereo images and the results aren't very pretty. (However, I used a specialized stereo camera instead of commodity hardware.)</p><p>I did not find many intricate details about this particular article (I guess I didn't look hard enough) but I'll be very interested to know how well this works in normal real-world situations instead of a "background blue-screen and one object moving in the foreground" situation. My guess is not well.</p></htmltext>
<tokentext>The problem is noise .
This is the single most annoying issue in this type of computer vision problem .
I 've written an entire thesis on reducing noise in stereo images and the results are n't very pretty .
( However , I used a specialized stereo camera instead of commodity hardware . )
I did not find many intricate details about this particular article ( I guess I did n't look hard enough ) but I 'll be very interested to know how well this works in normal real-world situations instead of a " background blue-screen and one object moving in the foreground " situation .
My guess is not well .</tokentext>
<sentencetext>The problem is noise.
This is the single most annoying issue in this type of computer vision problem.
I've written an entire thesis on reducing noise in stereo images and the results aren't very pretty.
(However, I used a specialized stereo camera instead of commodity hardware.)
I did not find many intricate details about this particular article (I guess I didn't look hard enough) but I'll be very interested to know how well this works in normal real-world situations instead of a "background blue-screen and one object moving in the foreground" situation.
My guess is not well.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249188</parent>
</comment>
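<!-- Editor's sketch: the stereo noise the comment above describes, made
visible. Plain block matching yields a speckly disparity map, and even a
crude median filter helps. Assumes OpenCV; the image paths are placeholders
and the input pair is assumed rectified.

import cv2

left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = stereo.compute(left, right)              # fixed-point disparities

disp8 = cv2.normalize(disp, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
denoised = cv2.medianBlur(disp8, 5)             # knocks out speckle noise

cv2.imwrite("disparity_raw.png", disp8)
cv2.imwrite("disparity_median.png", denoised)
-->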
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30250028</id>
	<title>Re:3D vision for robots</title>
	<author>eggnoglatte</author>
	<datestamp>1259321940000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>The idea is not stupid, but it also <a href="http://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping" title="wikipedia.org">isn't new</a> [wikipedia.org]. It just turns out to be a little harder to get working in practice than you might think.</p></htmltext>
<tokentext>The idea is not stupid , but it also is n't new [ wikipedia.org ] .
It just turns out to be a little harder to get working in practice than you might think .</tokentext>
<sentencetext>The idea is not stupid, but it also isn't new [wikipedia.org].
It just turns out to be a little harder to get working in practice than you might think.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249188</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30251326</id>
	<title>Re:3D vision for robots</title>
	<author>nghiaho12</author>
	<datestamp>1259329620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I actually did this for my thesis. I built a Quake-like 3D model of the environment using an expensive laser range scanner. The robot has enough geometric and photometric information to perform localisation and path planning. This type of problem, where you have the map beforehand, is generally called "global localisation" and is easier than the Simultaneous Localisation and Mapping (SLAM) problem, where you don't have a prior map.</htmltext>
<tokentext>I actually did this for my thesis .
I built a Quake-like 3D model of the environment using an expensive laser range scanner .
The robot has enough geometric and photometric information to perform localisation and path planning .
This type of problem , where you have the map beforehand , is generally called " global localisation " and is easier than the Simultaneous Localisation and Mapping ( SLAM ) problem , where you do n't have a prior map .</tokentext>
<sentencetext>I actually did this for my thesis.
I built a Quake-like 3D model of the environment using an expensive laser range scanner.
The robot has enough geometric and photometric information to perform localisation and path planning.
This type of problem, where you have the map beforehand, is generally called "global localisation" and is easier than the Simultaneous Localisation and Mapping (SLAM) problem, where you don't have a prior map.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249188</parent>
</comment>
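To make "global localisation with a known map" concrete, here is a minimal, hypothetical particle-filter sketch (plain NumPy; the 1-D corridor, door positions, and noise levels are invented, and this is not the parent's thesis code): the robot measures its distance to the nearest door, whose positions are known from the map, and the particle cloud collapses onto the poses consistent with those readings.

    import numpy as np

    rng = np.random.default_rng(2)
    doors = np.array([3.0, 12.0, 20.0])   # known map: door positions in a corridor
    N = 1000
    particles = rng.uniform(0, 25, N)     # "global": no idea where we start
    true_pose = 8.0

    def nearest_door_distance(x):
        # Distance from pose(s) x to the closest door in the map.
        return np.min(np.abs(np.subtract.outer(np.atleast_1d(x), doors)), axis=1)

    for step in range(15):
        # Move: robot and particles advance with motion noise.
        true_pose += 0.5
        particles += 0.5 + rng.normal(0, 0.1, N)
        # Sense: noisy distance to the nearest door.
        z = nearest_door_distance(true_pose)[0] + rng.normal(0, 0.2)
        # Weight particles by the likelihood of that reading (Gaussian sensor model).
        w = np.exp(-0.5 * ((nearest_door_distance(particles) - z) / 0.2) ** 2) + 1e-12
        w /= w.sum()
        # Resample in proportion to the weights.
        particles = particles[rng.choice(N, N, p=w)]

    print("true pose:", true_pose, " estimate:", particles.mean())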
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247964</id>
	<title>fp</title>
	<author>Anonymous</author>
	<datestamp>1259353500000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext>just dropped barack obama off at the pool!</htmltext>
<tokenext>just dropped barack obama off at the pool !</tokentext>
<sentencetext>just dropped barack obama off at the pool!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248082</id>
	<title>Download link?</title>
	<author>Anonymous</author>
	<datestamp>1259354340000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Is it just me or is there no download link?</p></htmltext>
<tokenext>Is it just me or is there no download link ?</tokentext>
<sentencetext>Is it just me or is there no download link?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30251642</id>
	<title>Re:The Death of Hollywood</title>
	<author>vikstar</author>
	<datestamp>1259332560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That's because Hollywood is just crap rendered in good graphics. There are international studios that produce high-quality cinema, which will live on due to the content, and not a glossy wrapper.</p></htmltext>
<tokenext>That 's because Hollywood is just crap rendered in good graphics .
There are international studios that produce high-quality cinema , which will live on due to the content , and not a glossy wrapper .</tokentext>
<sentencetext>That's because Hollywood is just crap rendered in good graphics.
There are international studios that produce high-quality cinema, which will live on due to the content, and not a glossy wrapper.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248390</id>
	<title>Goat saves family</title>
	<author>Anonymous</author>
	<datestamp>1259313360000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>flame me for being off-topic if you like... What's with the kid and the goat in the ad at the top of /. today? He's an "orphan" who says the goat saved the family - not his family, it would seem, 'cause he's an orphan. Non-profiteering rears its ugly head... Not a nickel!</p></htmltext>
<tokenext>flame me for being off-topic if you like... What 's with the kid and the goat in the ad at the top of /. today ?
He 's an " orphan " who says the goat saved the family - not his family , it would seem , 'cause he 's an orphan .
Non-profiteering rears its ugly head... Not a nickel !</tokentext>
<sentencetext>flame me for being off-topic if you like... What's with the kid and the goat in the ad at the top of /. today?
He's an "orphan" who says the goat saved the family - not his family, it would seem, 'cause he's an orphan.
Non-profiteering rears its ugly head... Not a nickel!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30250200</id>
	<title>Re:3D vision for robots</title>
	<author>hitmark</author>
	<datestamp>1259322960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Thanks for reminding me; I think I've read that research has shown this is how we humans operate.</p><p>As in, we build up an internal model of ourselves and what's around us, and keep refining said model constantly.</p><p>That's why we develop habits and preferences: they mean the models do not have to change.</p><p>This includes our own body, btw, and is the probable cause of phantom limb experiences.</p></htmltext>
<tokenext>Thanks for reminding me ; I think I 've read that research has shown this is how we humans operate .
As in , we build up an internal model of ourselves and what 's around us , and keep refining said model constantly .
That 's why we develop habits and preferences : they mean the models do not have to change .
This includes our own body , btw , and is the probable cause of phantom limb experiences .</tokentext>
<sentencetext>Thanks for reminding me; I think I've read that research has shown this is how we humans operate.
As in, we build up an internal model of ourselves and what's around us, and keep refining said model constantly.
That's why we develop habits and preferences: they mean the models do not have to change.
This includes our own body, btw, and is the probable cause of phantom limb experiences.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249188</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248128</id>
	<title>yawn, two decades ago called</title>
	<author>Anonymous</author>
	<datestamp>1259354640000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>And this is why no one with any ability chooses Computing at Cambridge.</p><p>On a plus note, MS snaps most of them up anyway, leaving the real engineering colleges (MIT, Imperial, etc.) to produce engineers.</p><p>Meanwhile, those who want the Oxbridge badge study Mathematics. Which is still a more worthy challenge than Engineering at MIT *ducks*.</p></htmltext>
<tokenext>And this is why no one with any ability chooses Computing at Cambridge .
On a plus note , MS snaps most of them up anyway , leaving the real engineering colleges ( MIT , Imperial , etc. ) to produce engineers .
Meanwhile , those who want the Oxbridge badge study Mathematics .
Which is still a more worthy challenge than Engineering at MIT * ducks * .</tokentext>
<sentencetext>And this is why no one with any ability chooses Computing at Cambridge.
On a plus note, MS snaps most of them up anyway, leaving the real engineering colleges (MIT, Imperial, etc.) to produce engineers.
Meanwhile, those who want the Oxbridge badge study Mathematics.
Which is still a more worthy challenge than Engineering at MIT *ducks*.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30250014</id>
	<title>Re:3D vision for robots</title>
	<author>Monkeedude1212</author>
	<datestamp>1259321880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There are a few kinks to work out with this model, but essentially, it could work. Specifically, the example they showed would model an object, and not the world around it. So the algorithm would have to be reworked to map the world, and like someone else mentioned, would probably use 2 cameras (which means it's processing twice the data, which means "real time" might not be so fast).</p><p>There is also the major issue of dynamic lighting effects. Since lighting is the primary cue for modelling something in 3D, you generally just move the object (or yourself through the world) and you assume that the light source is always stationary. Or, for more accuracy, you can move the light source across 129600 (360 by 360) degrees of space and, knowing the light's path, get a more accurate understanding of the depth and shape of an object or world. Movement and dynamic lighting combined would give you a far more accurate object than just one or the other. (In fact, if neither you nor the object moves, you'll only be getting half of what is there: essentially the front side of the object, or only what is sitting in front of you in your world.)</p><p>Now, the biggest kink would be a dynamic light source whose path isn't predicted. For example, the robot is walking along and a car drives by. The altered light on the ground could mean the robot perceives a bump in the road when, in fact, there is none.</p><p>I hope that made sense.</p></htmltext>
<tokenext>There are a few kinks to work out with this model , but essentially , it could work .
Specifically , the example they showed would model an object , and not the world around it .
So the algorithm would have to be reworked to map the world , and like someone else mentioned , would probably use 2 cameras ( which means it 's processing twice the data , which means " real time " might not be so fast ) .
There is also the major issue of dynamic lighting effects .
Since lighting is the primary cue for modelling something in 3D , you generally just move the object ( or yourself through the world ) and you assume that the light source is always stationary .
Or , for more accuracy , you can move the light source across 129600 ( 360 by 360 ) degrees of space and , knowing the light 's path , get a more accurate understanding of the depth and shape of an object or world .
Movement and dynamic lighting combined would give you a far more accurate object than just one or the other .
( In fact , if neither you nor the object moves , you 'll only be getting half of what is there : essentially the front side of the object , or only what is sitting in front of you in your world . )
Now , the biggest kink would be a dynamic light source whose path is n't predicted .
For example , the robot is walking along and a car drives by .
The altered light on the ground could mean the robot perceives a bump in the road when , in fact , there is none .
I hope that made sense .</tokentext>
<sentencetext>There are a few kinks to work out with this model, but essentially, it could work.
Specifically, the example they showed would model an object, and not the world around it.
So the algorithm would have to be reworked to map the world, and like someone else mentioned, would probably use 2 cameras (which means it's processing twice the data, which means "real time" might not be so fast).
There is also the major issue of dynamic lighting effects.
Since lighting is the primary cue for modelling something in 3D, you generally just move the object (or yourself through the world) and you assume that the light source is always stationary.
Or, for more accuracy, you can move the light source across 129600 (360 by 360) degrees of space and, knowing the light's path, get a more accurate understanding of the depth and shape of an object or world.
Movement and dynamic lighting combined would give you a far more accurate object than just one or the other.
(In fact, if neither you nor the object moves, you'll only be getting half of what is there: essentially the front side of the object, or only what is sitting in front of you in your world.)
Now, the biggest kink would be a dynamic light source whose path isn't predicted.
For example, the robot is walking along and a car drives by.
The altered light on the ground could mean the robot perceives a bump in the road when, in fact, there is none.
I hope that made sense.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249188</parent>
</comment>
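The "known light path" idea in the parent comment is essentially classical photometric stereo (not what ProFORMA does, which relies on moving the object): with three or more images under known light directions, per-pixel surface normals fall out of a linear solve. A minimal sketch, assuming a Lambertian surface and synthetic, invented intensities:

    import numpy as np

    # Lambertian model: intensity i = albedo * dot(light_direction, normal).
    # With >= 3 known light directions L (rows), solve L @ g = i for
    # g = albedo * normal, then split g into albedo and a unit normal.
    L = np.array([[0.0, 0.0, 1.0],
                  [0.7, 0.0, 0.7],
                  [0.0, 0.7, 0.7]])
    L /= np.linalg.norm(L, axis=1, keepdims=True)

    true_normal = np.array([0.2, -0.1, 1.0])
    true_normal /= np.linalg.norm(true_normal)
    true_albedo = 0.8
    i = true_albedo * (L @ true_normal)   # synthetic noiseless intensities

    g, *_ = np.linalg.lstsq(L, i, rcond=None)
    albedo = np.linalg.norm(g)
    print("recovered normal:", g / albedo, " albedo:", round(albedo, 3))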
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248910</id>
	<title>Re:The Death of Hollywood</title>
	<author>Anonymous</author>
	<datestamp>1259316360000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I develop video games and make short films for fun. It is painfully obvious that you have never attempted to make any sort of short film or (3D) video game, and you do not seem to understand how incompetent you are in this area.<br>I'm going to go cry now. That's how bad your post was.</p></htmltext>
<tokenext>I develop video games and make short films for fun .
It is painfully obvious that you have never attempted to make any sort of short film or ( 3D ) video game , and you do not seem to understand how incompetent you are in this area .
I 'm going to go cry now .
That 's how bad your post was .</tokentext>
<sentencetext>I develop video games and make short films for fun.
It is painfully obvious that you have never attempted to make any sort of short film or (3D) video game, and you do not seem to understand how incompetent you are in this area.
I'm going to go cry now.
That's how bad your post was.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249588</id>
	<title>Re:3D vision for robots</title>
	<author>Anonymous</author>
	<datestamp>1259319720000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>&gt; is this model any good?</p><p>Yes, it's the only workable model... but I am Anonymous Coward.</p></htmltext>
<tokenext>&gt; is this model any good ?
Yes , it 's the only workable model... but I am Anonymous Coward .</tokentext>
<sentencetext>&gt; is this model any good?
Yes, it's the only workable model... but I am Anonymous Coward.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249188</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30250952</id>
	<title>Re:Academic projects versus commercial application</title>
	<author>mds820</author>
	<datestamp>1259327280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Academia takes care of the R in R&amp;D.  Not necessarily the D.</htmltext>
<tokenext>Academia takes care of the R in R&amp;D .
Not necessarily the D .</tokentext>
<sentencetext>Academia takes care of the R in R&amp;D.
Not necessarily the D.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249976</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30253048</id>
	<title>Re:Academic projects versus commercial application</title>
	<author>laddiebuck</author>
	<datestamp>1259440020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Sadly, anything useful from academia (not implying that this is) is spun off into private companies nine times out of ten. This despite the fact that it's mostly developed on public money.</htmltext>
<tokenext>Sadly , anything useful from academia ( not implying that this is ) is spun off into private companies nine times out of ten .
This despite the fact that it 's mostly developed on public money .</tokentext>
<sentencetext>Sadly, anything useful from academia (not implying that this is) is spun off into private companies nine times out of ten.
This despite the fact that it's mostly developed on public money.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249976</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30270968</id>
	<title>Re:Academic projects versus commercial application</title>
	<author>Geminii</author>
	<datestamp>1259602620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If you are not good enough to be using CAD software professionally, then you are the 99% of the new market this thing just created.</htmltext>
<tokenext>If you are not good enough to be using CAD software professionally , then you are the 99 % of the new market this thing just created .</tokentext>
<sentencetext>If you are not good enough to be using CAD software professionally, then you are the 99% of the new market this thing just created.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30251118</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248544</id>
	<title>Pron?</title>
	<author>Anonymous</author>
	<datestamp>1259314320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I rest my case.</p></htmltext>
<tokenext>I rest my case .</tokentext>
<sentencetext>I rest my case.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30251478</id>
	<title>This is not new</title>
	<author>mapuche</author>
	<datestamp>1259331180000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>I haven't read the article yet, but there's already a program doing this with cheap cameras; version 1.0 was free:</p><p>http://www.david-laserscanner.com/</p></htmltext>
<tokenext>I have n't read the article yet , but there 's already a program doing this with cheap cameras ; version 1.0 was free : http://www.david-laserscanner.com/</tokentext>
<sentencetext>I haven't read the article yet, but there's already a program doing this with cheap cameras; version 1.0 was free: http://www.david-laserscanner.com/</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248094</id>
	<title>Any open source software?</title>
	<author>gnalle</author>
	<datestamp>1259354400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Is there any open source software that can generate a 3D model from photos?

As far as I can see, the source code of ProFORMA is closed.
<a href="http://mi.eng.cam.ac.uk/~qp202/my_papers/BMVC09/" title="cam.ac.uk">http://mi.eng.cam.ac.uk/~qp202/my_papers/BMVC09/</a> [cam.ac.uk]</htmltext>
<tokenext>Is there any open source software that can generate a 3D model from photos ?
As far as I can see , the source code of ProFORMA is closed .
http://mi.eng.cam.ac.uk/~qp202/my_papers/BMVC09/ [ cam.ac.uk ]</tokenext>
<sentencetext>Is there any open source software that can generate a 3D model from photos?
As far as I can see, the source code of ProFORMA is closed.
http://mi.eng.cam.ac.uk/~qp202/my_papers/BMVC09/ [cam.ac.uk]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30252382</id>
	<title>Volker Blanz and his 3D Celebrities</title>
	<author>MrSteveSD</author>
	<datestamp>1259342040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>There seems to be a huge gap between these kinds of academic projects and the commercially available programs.</p></div><p>
Indeed. Check out the work of <a href="http://www.mpi-inf.mpg.de/~blanz/" title="mpi-inf.mpg.de">Volker Blanz</a> [mpi-inf.mpg.de]. He was producing amazing 3D models of celebrities from photographs a decade ago. Yet you can't get anything remotely that good today (I've used Facegen etc.).</p>
	</htmltext>
<tokenext>There seems to be a huge gap between these kinds of academic projects and the commercially available programs .
Indeed .
Check out the work of Volker Blanz [ mpi-inf.mpg.de ] .
He was producing amazing 3D models of celebrities from photographs a decade ago .
Yet you ca n't get anything remotely that good today ( I 've used Facegen etc. ) .</tokentext>
<sentencetext>There seems to be a huge gap between these kinds of academic projects and the commercially available programs.
Indeed. Check out the work of Volker Blanz [mpi-inf.mpg.de].
He was producing amazing 3D models of celebrities from photographs a decade ago.
Yet you can't get anything remotely that good today (I've used Facegen etc.).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249976</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248102</id>
	<title>Wait..wait... on the FLY with a WEBcam?</title>
	<author>Anonymous</author>
	<datestamp>1259354400000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>Did the Slashdot contributor discover TFA with a spider?</p></htmltext>
<tokenext>Did the Slashdot contributor discover TFA with a spider ?</tokentext>
<sentencetext>Did the Slashdot contributor discover TFA with a spider?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249382</id>
	<title>Re:The Death of Hollywood</title>
	<author>westlake</author>
	<datestamp>1259318760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>With open-source rendering images already well established... that only leaves the content areas under-developed.<br>Throw in some voice-synthesis software, some directing software, and a million monkeys hammering away at plots, and Hollywood as an institution is dead.</i></p><p>The geek needs a million monkeys.</p><p>Hollywood gets by with a handful of men like John Lasseter, Andrew Stanton and Brad Bird. In sound design, a Ben Burtt.</p><p>Digitizing the prop is trivial.</p><p>Knowing which prop to use - and how to use it - is not.</p><p>There are around 400 individually designed objects in the "bubble wrap" scene in Wall-E.</p><p>For 10% of your final grade, your mission is to explain how these props are used to create a mood, comic or tragic, advance the story, and reveal character.</p></htmltext>
<tokenext>With open-source rendering images already well established... that only leaves the content areas under-developed .
Throw in some voice-synthesis software , some directing software , and a million monkeys hammering away at plots , and Hollywood as an institution is dead .
The geek needs a million monkeys .
Hollywood gets by with a handful of men like John Lasseter , Andrew Stanton and Brad Bird .
In sound design , a Ben Burtt .
Digitizing the prop is trivial .
Knowing which prop to use - and how to use it - is not .
There are around 400 individually designed objects in the " bubble wrap " scene in Wall-E .
For 10 % of your final grade , your mission is to explain how these props are used to create a mood , comic or tragic , advance the story , and reveal character .</tokentext>
<sentencetext>With open-source rendering images already well established... that only leaves the content areas under-developed.
Throw in some voice-synthesis software, some directing software, and a million monkeys hammering away at plots, and Hollywood as an institution is dead.
The geek needs a million monkeys.
Hollywood gets by with a handful of men like John Lasseter, Andrew Stanton and Brad Bird.
In sound design, a Ben Burtt.
Digitizing the prop is trivial.
Knowing which prop to use - and how to use it - is not.
There are around 400 individually designed objects in the "bubble wrap" scene in Wall-E.
For 10% of your final grade, your mission is to explain how these props are used to create a mood, comic or tragic, advance the story, and reveal character.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30250998</id>
	<title>I have seen this before</title>
	<author>Anonymous</author>
	<datestamp>1259327520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>In the 2002 Science Fair at UGA, I swear this exact same project was there and won overall.</p></htmltext>
<tokenext>In the 2002 Science Fair at UGA , I swear this exact same project was there and won overall .</tokentext>
<sentencetext>In the 2002 Science Fair at UGA, I swear this exact same project was there and won overall.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248150</id>
	<title>Re:The Death of Hollywood</title>
	<author>garg0yle</author>
	<datestamp>1259354820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There's one more key we're missing - the ability to render humans realistically.  We can manage just about everything else, but until we can make a virtual John Wayne that looks like John Wayne and not a wax mannequin, we're not going to see Hollywood abandon "talent".</p><p>Of course, once we can do so, the next step will be to "improve" the stars - start with a virtual Natalie Portman (for example), and then "tweak" her for further fanboy appeal.</p></htmltext>
<tokenext>There 's one more key we 're missing - the ability to render humans realistically .
We can manage just about everything else , but until we can make a virtual John Wayne that looks like John Wayne and not a wax mannequin , we 're not going to see Hollywood abandon " talent " .
Of course , once we can do so , the next step will be to " improve " the stars - start with a virtual Natalie Portman ( for example ) , and then " tweak " her for further fanboy appeal .</tokentext>
<sentencetext>There's one more key we're missing - the ability to render humans realistically.
We can manage just about everything else, but until we can make a virtual John Wayne that looks like John Wayne and not a wax mannequin, we're not going to see Hollywood abandon "talent".
Of course, once we can do so, the next step will be to "improve" the stars - start with a virtual Natalie Portman (for example), and then "tweak" her for further fanboy appeal.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30253630</id>
	<title>Re:Academic projects versus commercial application</title>
	<author>HigH5</author>
	<datestamp>1259409540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>"Genius is one percent inspiration and 99 percent perspiration." Thomas Edison
<br>
I think that the 99\% percent is often the problem with these projects. You come up with something, make a proof of concept, but it takes a lot more work to perfect it.</htmltext>
<tokenext>" Genius is one percent inspiration and 99 percent perspiration .
" Thomas Edison I think that the 99 \ % percent is often the problem with these projects .
You come up with something , make a proof of concept , but it takes a lot more work to perfect it .</tokentext>
<sentencetext>"Genius is one percent inspiration and 99 percent perspiration.
" Thomas Edison

I think that the 99\% percent is often the problem with these projects.
You come up with something, make a proof of concept, but it takes a lot more work to perfect it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30251118</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248150
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30250028
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249188
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248976
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248350
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249282
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248094
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30250952
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249976
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30251326
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249188
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248098
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30250200
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249188
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248418
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30252382
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249976
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249788
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249588
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249188
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30253462
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248404
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249806
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249188
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249382
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30253630
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30251118
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249976
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30251642
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30254434
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248320
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249660
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249188
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30255100
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30250014
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249188
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30270968
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30251118
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249976
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248910
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30252992
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249188
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30253048
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249976
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248674
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_27_1851243_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30254168
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248320
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_27_1851243.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248094
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249282
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_27_1851243.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247998
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30251642
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249382
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248418
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248674
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248150
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248910
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249788
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248098
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248404
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30253462
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_27_1851243.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30247964
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_27_1851243.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248350
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248976
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_27_1851243.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248544
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_27_1851243.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249976
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30250952
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30251118
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30253630
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30270968
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30253048
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30252382
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_27_1851243.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248102
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_27_1851243.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248028
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_27_1851243.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30251478
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_27_1851243.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249188
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249588
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249660
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30249806
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30250028
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30251326
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30252992
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30250200
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30250014
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30255100
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_27_1851243.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248320
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30254434
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30254168
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_27_1851243.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_27_1851243.30248082
</commentlist>
</conversation>
