<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_11_15_2011203</id>
	<title>Nvidia's RealityServer 3.0 Demonstrated</title>
	<author>kdawson</author>
	<datestamp>1258302960000</datestamp>
	<htmltext>robotsrule writes <i>"As we discussed last month, <a href="//tech.slashdot.org/story/09/10/21/1354224/NVIDIA-Targeting-Real-Time-Cloud-Rendering">RealityServer 3.0</a> is Nvidia's attempt to bring photo-realistic 3D images to any Internet-connected device, including the likes of Android and iPhone. RealityServer 3.0 pushes the CPU-killing 3D rendering process to a high-power, GPU based, back-end server farm based on Nvidia's Tesla or Quadro architectures. The resulting images are then streamed back to the client device in seconds; such images would normally take hours to compute even on a high-end unassisted workstation. Extreme Tech has up an article containing an <a href="http://www.extremetech.com/print_article2/0,1217,a%253D245901,00.asp">interview with product managers from Nvidia and Mental Images</a>, whose <em>iray</em> application is employed in a <a href="http://www.youtube.com/watch?v=Q-I58PPMPfs">two-minute video demonstration</a> of near-real-time ray-traced rendering."</i> Once you get to the Extreme Tech site, going to the printable version will help to preserve sanity.</htmltext>
<tokentext>robotsrule writes " As we discussed last month , RealityServer 3.0 is Nvidia 's attempt to bring photo-realistic 3D images to any Internet-connected device , including the likes of Android and iPhone .
RealityServer 3.0 pushes the CPU-killing 3D rendering process to a high-power , GPU based , back-end server farm based on Nvidia 's Tesla or Quadro architectures .
The resulting images are then streamed back to the client device in seconds ; such images would normally take hours to compute even on a high-end unassisted workstation .
Extreme Tech has up an article containing an interview with product managers from Nvidia and Mental Images , whose iray application is employed in a two-minute video demonstration of near-real-time ray-traced rendering .
" Once you get to the Extreme Tech site , going to the printable version will help to preserve sanity .</tokentext>
<sentencetext>robotsrule writes "As we discussed last month, RealityServer 3.0 is Nvidia's attempt to bring photo-realistic 3D images to any Internet-connected device, including the likes of Android and iPhone.
RealityServer 3.0 pushes the CPU-killing 3D rendering process to a high-power, GPU based, back-end server farm based on Nvidia's Tesla or Quadro architectures.
The resulting images are then streamed back to the client device in seconds; such images would normally take hours to compute even on a high-end unassisted workstation.
Extreme Tech has up an article containing an interview with product managers from Nvidia and Mental Images, whose iray application is employed in a two-minute video demonstration of near-real-time ray-traced rendering.
" Once  you get to the Extreme Tech site, going to the printable version will help to preserve sanity.</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30128440</id>
	<title>Suggested this years ago</title>
	<author>mattr</author>
	<datestamp>1258471200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>FWIW I suggested rendering and compositing multiple video streams into a single one then download to a local mobile terminal a number of years ago. I guess you just wait until you get good enough hardware and then when you hit the sweet spot everything just materializes.</p></htmltext>
<tokentext>FWIW I suggested rendering and compositing multiple video streams into a single one then download to a local mobile terminal a number of years ago .
I guess you just wait until you get good enough hardware and then when you hit the sweet spot everything just materializes .</tokentext>
<sentencetext>FWIW I suggested rendering and compositing multiple video streams into a single one then download to a local mobile terminal a number of years ago.
I guess you just wait until you get good enough hardware and then when you hit the sweet spot everything just materializes.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30123608</id>
	<title>Re:Who cares</title>
	<author>Virtual_Raider</author>
	<datestamp>1258374540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>NVidia make shit, their drivers are horrible.</p></div><p>That wouldn't be our problem, since they are going to do the rendering on their own hardware somewhere in Bespin, ainnit?</p><p>I think a smashing app for this would be some sort of World of Warcrack or similar game where they could have a combo of locally rendered/remotely served graphics. Some of the newer high-end mobiles have pretty strong GPUs considering, perhaps use them to render characters and the fast/dynamically changing content and serve backgrounds and foes from this reality server. There may be hundreds of special-purpose apps that could take advantage of this, but I struggle to come up with a mass-scenario other than gaming and augmented-reality tho.</p></div>
	</htmltext>
<tokentext>NVidia make shit , their drivers are horrible .
That would n't be our problem , since they are going to do the rendering on their own hardware somewhere in Bespin , ainnit ?
I think a smashing app for this would be some sort of World of Warcrack or similar game where they could have a combo of locally rendered/remotely served graphics .
Some of the newer high-end mobiles have pretty strong GPUs considering , perhaps use them to render characters and the fast/dynamically changing content and serve backgrounds and foes from this reality server .
There may be hundreds of special-purpose apps that could take advantage of this , but I struggle to come up with a mass-scenario other than gaming and augmented-reality tho .</tokentext>
<sentencetext>NVidia make shit, their drivers are horrible.
That wouldn't be our problem, since they are going to do the rendering on their own hardware somewhere in Bespin, ainnit?
I think a smashing app for this would be some sort of World of Warcrack or similar game where they could have a combo of locally rendered/remotely served graphics.
Some of the newer high-end mobiles have pretty strong GPUs considering, perhaps use them to render characters and the fast/dynamically changing content and serve backgrounds and foes from this reality server.
There may be hundreds of special-purpose apps that could take advantage of this, but I struggle to come up with a mass-scenario other than gaming and augmented-reality tho.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112006</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30113098</id>
	<title>Re:One question: Why?</title>
	<author>Anonymous</author>
	<datestamp>1258364580000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext>One word: iPhone app.<p>Imagine Street View rendered in the direction you are holding your phone, from your position.  With all the goodies that that 3D map that someone was building a while back (and sure could be ongoing) plus a live application of the algorithm from Canoma and similar applications, you could have a pretty interesting "virtual" world.  Another benefit would be that while using the application, you could be aiding the mapping backend with live GPS to refine the map and the 3D model on top of it.</p></htmltext>
<tokentext>One word : iPhone app .
Imagine Street View rendered in the direction you are holding your phone , from your position .
With all the goodies that that 3D map that someone was building a while back ( and sure could be ongoing ) plus a live application of the algorithm from Canoma and similar applications , you could have a pretty interesting " virtual " world .
Another benefit would be that while using the application , you could be aiding the mapping backend with live GPS to refine the map and the 3D model on top of it .</tokentext>
<sentencetext>One word: iPhone app.
Imagine Street View rendered in the direction you are holding your phone, from your position.
With all the goodies that that 3D map that someone was building a while back (and sure could be ongoing) plus a live application of the algorithm from Canoma and similar applications, you could have a pretty interesting "virtual" world.
Another benefit would be that while using the application, you could be aiding the mapping backend with live GPS to refine the map and the 3D model on top of it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112332</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112002</id>
	<title>first post</title>
	<author>Anonymous</author>
	<datestamp>1258306740000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>i fucking win</p></htmltext>
<tokentext>i fucking win</tokentext>
<sentencetext>i fucking win</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112016</id>
	<title>Hours and hours</title>
	<author>Anonymous</author>
	<datestamp>1258306860000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><p><div class="quote"><p>such images would normally take hours to compute even on a high-end unassisted workstation</p></div><p>Now, they take hours to download over your GSM network.</p></div>
	</htmltext>
<tokentext>such images would normally take hours to compute even on a high-end unassisted workstation
Now , they take hours to download over your GSM network .</tokentext>
<sentencetext>such images would normally take hours to compute even on a high-end unassisted workstation
Now, they take hours to download over your GSM network.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30117408</id>
	<title>Re:This is Old Technology</title>
	<author>Anonymous</author>
	<datestamp>1258395960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>text, bah, that's fast... I guess it'll take a few more months before you get the 'mms' of me and her doing it on your kitchen table...</p></htmltext>
<tokentext>text , bah , that 's fast ...
I guess it 'll take a few more months before you get the ' mms ' of me and her doing it on your kitchen table ...</tokentext>
<sentencetext>text, bah, that's fast... I guess it'll take a few more months before you get the 'mms' of me and her doing it on your kitchen table...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112302</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30117170</id>
	<title>Impressive, but...</title>
	<author>KnownIssues</author>
	<datestamp>1258395060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Calling this a "real-time" raytracing server seems a bit disingenuous. Is it blazing fast? Yes. Is it real-time (as the demo claims)? I suppose it's semantics, but I think it would have to be x-frames-per-second to be real-time. TFA calling it near-real-time seems a little more reasonable, but still hype. Can "within seconds" still be considered even near-real-time?</htmltext>
<tokentext>Calling this a " real-time " raytracing server seems a bit disingenuous .
Is it blazing fast ?
Yes. Is it real-time ( as the demo claims ) ?
I suppose it 's semantics , but I think it would have to be x-frames-per-second to be real-time .
TFA calling it near-real-time seems a little more reasonable , but still hype .
Can " within seconds " still be considered even near-real-time ?</tokentext>
<sentencetext>Calling this a "real-time" raytracing server seems a bit disingenuous.
Is it blazing fast?
Yes. Is it real-time (as the demo claims)?
I suppose it's semantics, but I think it would have to be x-frames-per-second to be real-time.
TFA calling it near-real-time seems a little more reasonable, but still hype.
Can "within seconds" still be considered even near-real-time?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30115664</id>
	<title>Ohhh, that's what it's for</title>
	<author>metalhed77</author>
	<datestamp>1258388460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I, like many of you here, was wondering what the hell this could possibly be useful for, up until I viewed the video.</p><p>The answer, clearly, is porn.</p><p>It all makes sense now!</p></htmltext>
<tokentext>I , like many of you here , was wondering what the hell this could possibly be useful for , up until I viewed the video .
The answer , clearly , is porn .
It all makes sense now !</tokentext>
<sentencetext>I, like many of you here, was wondering what the hell this could possibly be useful for, up until I viewed the video.
The answer, clearly, is porn.
It all makes sense now!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112360</id>
	<title>Concept is kinda cool, but...</title>
	<author>topham</author>
	<datestamp>1258311180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The concept is kinda cool but their demo could have been easily faked. It isn't convincing until I can wander around the room on demand while tweaking the environment.<br>As well, it's next to useless if it takes a $15K machine to generate the required images in pseudo-realtime for a single session.<br>(useless in the remote access sense, not necessarily useless in a studio environment for architecture or vehicle modelling; although those needs can be met with a rendered video sequence anyway.)</p></htmltext>
<tokentext>The concept is kinda cool but their demo could have been easily faked .
It is n't convincing until I can wander around the room on demand while tweaking the environment .
As well , it 's next to useless if it takes a $ 15K machine to generate the required images in pseudo-realtime for a single session .
( useless in the remote access sense , not necessarily useless in a studio environment for architecture or vehicle modelling ; although those needs can be met with a rendered video sequence anyway . )</tokentext>
<sentencetext>The concept is kinda cool but their demo could have been easily faked.
It isn't convincing until I can wander around the room on demand while tweaking the environment.
As well, it's next to useless if it takes a $15K machine to generate the required images in pseudo-realtime for a single session.
(useless in the remote access sense, not necessarily useless in a studio environment for architecture or vehicle modelling; although those needs can be met with a rendered video sequence anyway.)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30121478</id>
	<title>Progressive refinement, not progressive JPEGs</title>
	<author>kopo</author>
	<datestamp>1258366140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The video narration is inaccurate. What you see there is not a progressive JPEG loading (they <em>might</em> be using progressive compression for the JPEG, but it doesn't matter).<br>What you're seeing is <em>progressive refinement</em>, which is a raytracing rendering technique that starts to show an image immediately and continuously adds detail (rather than rendering the image in full detail immediately). The light and dark splotches you initially see are a typical artifact of low-detail radiosity rendering.<br>More information <a href="http://www.google.com/search?q=progressive+refinement" title="google.com" rel="nofollow">here</a> [google.com].</htmltext>
<tokentext>The video narration is inaccurate .
What you see there is not a progressive JPEG loading ( they might be using progressive compression for the JPEG , but it does n't matter ) .
What you 're seeing is progressive refinement , which is a raytracing rendering technique that starts to show an image immediately and continuously adds detail ( rather than rendering the image in full detail immediately ) .
The light and dark splotches you initially see are a typical artifact of low-detail radiosity rendering .
More information here [ google.com ] .</tokentext>
<sentencetext>The video narration is inaccurate.
What you see there is not a progressive JPEG loading (they might be using progressive compression for the JPEG, but it doesn't matter).
What you're seeing is progressive refinement, which is a raytracing rendering technique that starts to show an image immediately and continuously adds detail (rather than rendering the image in full detail immediately).
The light and dark splotches you initially see are a typical artifact of low-detail radiosity rendering.
More information here [google.com].</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30116082</id>
	<title>Re:Hours and hours</title>
	<author>Anonymous</author>
	<datestamp>1258390440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>How... boobular.</htmltext>
<tokentext>How... boobular .</tokentext>
<sentencetext>How... boobular.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112278</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112164</id>
	<title>Danger to America and GOD and FAMILY</title>
	<author>For a Free Internet</author>
	<datestamp>1258308660000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext><p>This devious "innovation" is just perfect for Italians to use to seduce young American boys into a life of Islamo-Communism through psychedelic "ray"-traced hypnosis on their youtubes. And why is Barack Hussein Obama silent about this? I wonder!!!!</p></htmltext>
<tokentext>This devious " innovation " is just perfect for Italians to use to seduce young American boys into a life of Islamo-Communism through psychedelic " ray " -traced hypnosis on their youtubes .
And why is Barack Hussein Obama silent about this ?
I wonder ! ! ! !</tokentext>
<sentencetext>This devious "innovation" is just perfect for Italians to use to seduce young American boys into a life of Islamo-Communism through psychedelic "ray"-traced hypnosis on their youtubes.
And why is Barack Hussein Obama silent about this?
I wonder!!!!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112460</id>
	<title>Re:One question: Why?</title>
	<author>Anonymous</author>
	<datestamp>1258312740000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>dude, think of the porno</p></htmltext>
<tokentext>dude , think of the porno</tokentext>
<sentencetext>dude, think of the porno</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112332</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112690</id>
	<title>Resident Evil</title>
	<author>plasticsquirrel</author>
	<datestamp>1258402320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Resident Evil and a number of other action/adventure and RPG games from the mid to late 90's innovated this, albeit in a much more limited way. Character enters a room, switch to another image. Character progresses further into the room, switch to a more appropriate angle. All the environments are pre-rendered, and 3D characters play around in them as though they are real-time. It always looked good on the PS1, and I admired the simplicity of the method and its impressive results. It looks like they are just making natural extensions to it, by streaming the images over the Internet in real-time, so the various elements can all be updated on a rendering farm. I wonder how efficient it will be, though, and what sort of compromises will be made? Obviously, streaming high-res images in real-time is an excellent way to devour massive amounts of bandwidth....<br> <br>I never understood why people were so keen to move away from isometric perspectives and pre-rendered backgrounds? As long as the angles are good, they can really help to present the game in a well-crafted and artful way.</htmltext>
<tokentext>Resident Evil and a number of other action/adventure and RPG games from the mid to late 90 's innovated this , albeit in a much more limited way .
Character enters a room , switch to another image .
Character progresses further into the room , switch to a more appropriate angle .
All the environments are pre-rendered , and 3D characters play around in them as though they are real-time .
It always looked good on the PS1 , and I admired the simplicity of the method and its impressive results .
It looks like they are just making natural extensions to it , by streaming the images over the Internet in real-time , so the various elements can all be updated on a rendering farm .
I wonder how efficient it will be , though , and what sort of compromises will be made ?
Obviously , streaming high-res images in real-time is an excellent way to devour massive amounts of bandwidth ....
I never understood why people were so keen to move away from isometric perspectives and pre-rendered backgrounds ?
As long as the angles are good , they can really help to present the game in a well-crafted and artful way .</tokentext>
<sentencetext>Resident Evil and a number of other action/adventure and RPG games from the mid to late 90's innovated this, albeit in a much more limited way.
Character enters a room, switch to another image.
Character progresses further into the room, switch to a more appropriate angle.
All the environments are pre-rendered, and 3D characters play around in them as though they are real-time.
It always looked good on the PS1, and I admired the simplicity of the method and its impressive results.
It looks like they are just making natural extensions to it, by streaming the images over the Internet in real-time, so the various elements can all be updated on a rendering farm.
I wonder how efficient it will be, though, and what sort of compromises will be made?
Obviously, streaming high-res images in real-time is an excellent way to devour massive amounts of bandwidth....
I never understood why people were so keen to move away from isometric perspectives and pre-rendered backgrounds?
As long as the angles are good, they can really help to present the game in a well-crafted and artful way.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112022</id>
	<title>Yay!</title>
	<author>Dan East</author>
	<datestamp>1258306920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Alright, now I can play Doom 3 on my Razr cell phone! 0.2 FPS here I come!</p></htmltext>
<tokentext>Alright , now I can play Doom 3 on my Razr cell phone !
0.2 FPS here I come !</tokentext>
<sentencetext>Alright, now I can play Doom 3 on my Razr cell phone!
0.2 FPS here I come!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30113058</id>
	<title>Re:Concept is kinda cool, but...</title>
	<author>im_thatoneguy</author>
	<datestamp>1258364040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I know slashdot is keen on saying "The Cloud" is a buzzword and meaningless bullshit.  But that's because Slashdot is evidently completely clueless to what cloud computing really means.  What it means in this case is you pay for the processing you need.   You don't buy a $15k server.  You pay Amazon or Google or some other cloud processor for render time.  If you need 3 seconds of rendering then they charge you 3c for the trouble.</p></htmltext>
<tokentext>I know slashdot is keen on saying " The Cloud " is a buzzword and meaningless bullshit .
But that 's because Slashdot is evidently completely clueless to what cloud computing really means .
What it means in this case is you pay for the processing you need .
You do n't buy a $ 15k server .
You pay Amazon or Google or some other cloud processor for render time .
If you need 3 seconds of rendering then they charge you 3c for the trouble .</tokentext>
<sentencetext>I know slashdot is keen on saying "The Cloud" is a buzzword and meaningless bullshit.
But that's because Slashdot is evidently completely clueless to what cloud computing really means.
What it means in this case is you pay for the processing you need.
You don't buy a $15k server.
You pay Amazon or Google or some other cloud processor for render time.
If you need 3 seconds of rendering then they charge you 3c for the trouble.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112360</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30114746</id>
	<title>You haven't thought your cunning plan through.</title>
	<author>jeffb (2.718)</author>
	<datestamp>1258384320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>So, you're going to prepare high-quality images in response to requests from mobile devices.  Your "cloud", a vast farm of massively powerful rendering engines, will prepare these images thousands of times more quickly than your iPhone's pathetic processor, and stream them back to your display.  Neato.</p><p>Now, since this works so well, millions of mobile users will flock to the service.  Thousands at a time will be requesting images.  Fortunately, that render farm is still thousands of times faster than a mobile device, so each of those requests will be rendered as quickly as -- well, about as quickly as a single mobile device could do it.</p><p>Getting increased speed out of a cloud only works when you've got a relatively large number of cloud machines and a relatively small number of clients.  If you've got more clients than hosts, all you've done is added a lot of communication overhead and some slick load-balancing.</p></htmltext>
<tokentext>So , you 're going to prepare high-quality images in response to requests from mobile devices .
Your " cloud " , a vast farm of massively powerful rendering engines , will prepare these images thousands of times more quickly than your iPhone 's pathetic processor , and stream them back to your display .
Neato .
Now , since this works so well , millions of mobile users will flock to the service .
Thousands at a time will be requesting images .
Fortunately , that render farm is still thousands of times faster than a mobile device , so each of those requests will be rendered as quickly as -- well , about as quickly as a single mobile device could do it .
Getting increased speed out of a cloud only works when you 've got a relatively large number of cloud machines and a relatively small number of clients .
If you 've got more clients than hosts , all you 've done is added a lot of communication overhead and some slick load-balancing .</tokentext>
<sentencetext>So, you're going to prepare high-quality images in response to requests from mobile devices.
Your "cloud", a vast farm of massively powerful rendering engines, will prepare these images thousands of times more quickly than your iPhone's pathetic processor, and stream them back to your display.
Neato.
Now, since this works so well, millions of mobile users will flock to the service.
Thousands at a time will be requesting images.
Fortunately, that render farm is still thousands of times faster than a mobile device, so each of those requests will be rendered as quickly as -- well, about as quickly as a single mobile device could do it.
Getting increased speed out of a cloud only works when you've got a relatively large number of cloud machines and a relatively small number of clients.
If you've got more clients than hosts, all you've done is added a lot of communication overhead and some slick load-balancing.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112088</id>
	<title>Bad name, no buzz?</title>
	<author>Fotograf</author>
	<datestamp>1258307820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>they should have called it

CLOUD REALITY!</htmltext>
<tokentext>they should have called it CLOUD REALITY !</tokentext>
<sentencetext>they should have called it

CLOUD REALITY!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30115624</id>
	<title>Re:One question: Why?</title>
	<author>pwfffff</author>
	<datestamp>1258388280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"Imagine Street View rendered in the direction you are holding your phone, from your position."</p><p>Congratulations, you've invented AR, which has been an app on my phone for about a year now. It's called using the input from the goddamn camera stuck on the front.</p></htmltext>
<tokenext>" Imagine Street View rendered in the direction you are holding your phone , from your position .
" Congratulations , you 've invented AR , which has been an app on my phone for about a year now .
It 's called using the input from the goddamn camera stuck on the front .</tokentext>
<sentencetext>"Imagine Street View rendered in the direction you are holding your phone, from your position.
"Congratulations, you've invented AR, which has been an app on my phone for about a year now.
It's called using the input from the goddamn camera stuck on the front.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30113098</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30114662</id>
	<title>Re:Good for VR</title>
	<author>LS</author>
	<datestamp>1258383600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>[they were supposed to replace worker's PCs for word processing, spreadsheets, etc].</p></div><p>Um they have for a large portion of the working populace.  The last two companies I've worked at use Google docs.</p><p>LS</p></div>
	</htmltext>
<tokentext>[ they were supposed to replace worker 's PCs for word processing , spreadsheets , etc ] .
Um they have for a large portion of the working populace .
The last two companies I 've worked at use Google docs . LS</tokentext>
<sentencetext>[they were supposed to replace worker's PCs for word processing, spreadsheets, etc].
Um they have for a large portion of the working populace.
The last two companies I've worked at use Google docs. LS
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112682</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112302</id>
	<title>This is Old Technology</title>
	<author>webbiedave</author>
	<datestamp>1258310400000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext>I got some reality served to my phone last week in the form of a break up text from my girlfriend. It took four months to render.</htmltext>
<tokentext>I got some reality served to my phone last week in the form of a break up text from my girlfriend .
It took four months to render .</tokentext>
<sentencetext>I got some reality served to my phone last week in the form of a break up text from my girlfriend.
It took four months to render.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112446</id>
	<title>Preserve Sanity</title>
	<author>Anonymous</author>
	<datestamp>1258312560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>With nVidia involved you can be assured that sanity will play no part.<br>1.7% VHS wood screws, the way it's meant to be renamed.</p></htmltext>
<tokentext>With nVidia involved you can be assured that sanity will play no part.1.7 % VHS wood screws , the way it 's meant to be renamed .</tokentext>
<sentencetext>With nVidia involved you can be assured that sanity will play no part.1.7% VHS wood screws, the way it's meant to be renamed.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112008</id>
	<title>Vaporware.</title>
	<author>Anonymous</author>
	<datestamp>1258306740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I call it.</p></htmltext>
<tokentext>I call it .</tokentext>
<sentencetext>I call it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112278</id>
	<title>Re:Hours and hours</title>
	<author>Romancer</author>
	<datestamp>1258310160000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>Better demo of the capabilities here:</p><p><a href="http://www.youtube.com/watch?v=atcIv1K_gVI&amp;feature=related" title="youtube.com">http://www.youtube.com/watch?v=atcIv1K_gVI&amp;feature=related</a> [youtube.com]</p></htmltext>
<tokentext>Better demo of the capabilities here : http : //www.youtube.com/watch ? v = atcIv1K _gVI&amp;feature = related [ youtube.com ]</tokentext>
<sentencetext>Better demo of the capabilities here:http://www.youtube.com/watch?v=atcIv1K_gVI&amp;feature=related [youtube.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112016</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30119100</id>
	<title>Re:Hours and hours</title>
	<author>ElizabethGreene</author>
	<datestamp>1258400940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>How long until Barswf or Pyrit is ported onto that cloud?<nobr> <wbr></nobr>:D</p></htmltext>
<tokentext>How long until Barswf or Pyrit is ported onto that cloud ?
: D</tokentext>
<sentencetext>How long until Barswf or Pyrit is ported onto that cloud?
:D</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112016</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112472</id>
	<title>Re:One question: Why?</title>
	<author>Anonymous</author>
	<datestamp>1258312860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If they start increasing the number of options, a la the Scion brand, then that quickly becomes impractical or impossible. Far easier to render and cache temporarily than to store all possible renders.</p></htmltext>
<tokentext>If they start increasing the number of options , a la the Scion brand , then that quickly becomes impractical or impossible .
Far easier to render and cache temporarily than to store all possible renders .</tokentext>
<sentencetext>If they start increasing the number of options, a la the Scion brand, then that quickly becomes impractical or impossible.
Far easier to render and cache temporarily than to store all possible renders.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112332</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112332</id>
	<title>One question:  Why?</title>
	<author>adolf</author>
	<datestamp>1258310820000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>Summit, in TFA, goes on at different points about a car application -- ie, a system that one might use to preview and/or order new cars.  Pick your wheels, your paint, your trim, your seats, and get a few views of the thing in short order*.</p><p>All I can think is that if it were really so important for Ford to give you a raytraced view of the car you're ordering, that the options are so limited that all of them could easily be pre-rendered and sent all together.  How big are a few dozen JPEGs, anyway?</p><p>Even if a few dozen JPEGs isn't enough:  Don't we do this already with car manufacturer websites, using little more than bog-standard HTML and a whole bunch of prerendered images?  In what way would having this stuff be rendered in real-time be any more advantageous than doing it in advance?</p><p>Do we really need some manner of fancy client-server process, with some badass cloud architecture behind it, when at the end of the day, we're only going to be shown artifact-filled progressive-JPEG still frames with a finite number of possibilities?</p><p>Everyone, please, go look at the demo video.  Neat stuff, I guess, but it's boring.  Office with blinds open; same office, blinds partly open.  Then, closed.  Office at night.  Different angle.  Woo.  It's simple math to figure out how many options there are, and it's just as simple to see that it's easier, cheaper, and better to just go ahead and render ALL of them in advance and be done with it and just serve out static images from then on out.</p><p>If I'm really missing the point here (and I hope I am), would someone please enlighten me as to how this might actually, you know, <i>solve a problem</i>?</p><p>*:  Just like a lot of auto manufacturer's websites already do TODAY, using only HTML, static images, and a sprinkling of javascript or (less often) flash.</p></htmltext>
<tokentext>Summit , in TFA , goes on at different points about a car application -- ie , a system that one might use to preview and/or order new cars .
Pick your wheels , your paint , your trim , your seats , and get a few views of the thing in short order * .All I can think is that if it were really so important for Ford to give you a raytraced view of the car you 're ordering , that the options are so limited that all of them could easily be pre-rendered and sent all together .
How big are a few dozen JPEGs , anyway ? Even if a few dozen JPEGs is n't enough : Do n't we do this already with car manufacturer websites , using little more than bog-standard HTML and a whole bunch of prerendered images ?
In what way would having this stuff be rendered in real-time be any more advantageous than doing it in advance ? Do we really need some manner of fancy client-server process , with some badass cloud architecture behind it , when at the end of the day , we 're only going to be shown artifact-filled progressive-JPEG still frames with a finite number of possibilities ? Everyone , please , go look at the demo video .
Neat stuff , I guess , but it 's boring .
Office with blinds open ; same office , blinds partly open .
Then , closed .
Office at night .
Different angle .
Woo. It 's simple math to figure out how many options there are , and it 's just as simple to see that it 's easier , cheaper , and better to just go ahead and render ALL of them in advance and be done with it and just serve out static images from then on out.If I 'm really missing the point here ( and I hope I am ) , would someone please enlighten me as to how this might actually , you know , solve a problem ?
* : Just like a lot of auto manufacturer 's websites already do TODAY , using only HTML , static images , and a sprinkling of javascript or ( less often ) flash .</tokentext>
<sentencetext>Summit, in TFA, goes on at different points about a car application -- ie, a system that one might use to preview and/or order new cars.
Pick your wheels, your paint, your trim, your seats, and get a few views of the thing in short order*.All I can think is that if it were really so important for Ford to give you a raytraced view of the car you're ordering, that the options are so limited that all of them could easily be pre-rendered and sent all together.
How big are a few dozen JPEGs, anyway?Even if a few dozen JPEGs isn't enough:  Don't we do this already with car manufacturer websites, using little more than bog-standard HTML and a whole bunch of prerendered images?
In what way would having this stuff be rendered in real-time be any more advantageous than doing it in advance?Do we really need some manner of fancy client-server process, with some badass cloud architecture behind it, when at the end of the day, we're only going to be shown artifact-filled progressive-JPEG still frames with a finite number of possibilities?Everyone, please, go look at the demo video.
Neat stuff, I guess, but it's boring.
Office with blinds open; same office, blinds partly open.
Then, closed.
Office at night.
Different angle.
Woo.  It's simple math to figure out how many options there are, and it's just as simple to see that it's easier, cheaper, and better to just go ahead and render ALL of them in advance and be done with it and just serve out static images from then on out.If I'm really missing the point here (and I hope I am), would someone please enlighten me as to how this might actually, you know, solve a problem?
*:  Just like a lot of auto manufacturer's websites already do TODAY, using only HTML, static images, and a sprinkling of javascript or (less often) flash.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112726</id>
	<title>Re:Yay!</title>
	<author>Anonymous</author>
	<datestamp>1258402620000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Reminds me of Myst more than anything.</p></htmltext>
<tokentext>Reminds me of Myst more than anything .</tokentext>
<sentencetext>Reminds me of Myst more than anything.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112022</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112300</id>
	<title>Re:Better to just edit it on a computer</title>
	<author>MobileTatsu-NJG</author>
	<datestamp>1258310400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>The only use would probably be for sales men and designers who want to show their work in different lighting to a potential customer... but even then they could render that ahead of time. I just don't get it, could someone enlighten me?</p></div><p>They can only do that ahead of time if they're the ones making the aesthetic decisions.  If I wanted to show the director of a movie an environment and get his feedback, I could make the changes right there for him to see and get the OK right away.</p></p>
	</htmltext>
<tokentext>The only use would probably be for sales men and designers who want to show their work in different lighting to a potential customer... but even then they could render that ahead of time .
I just do n't get it , could someone enlighten me ? They can only do that ahead of time if they 're the ones making the aesthetic decisions .
If I wanted to show the director of a movie an environment and get his feedback , I could make the changes right there for him to see and get the OK right away .</tokentext>
<sentencetext>The only use would probably be for sales men and designers who want to show their work in different lighting to a potential customer... but even then they could render that ahead of time.
I just don't get it, could someone enlighten me?They can only do that ahead of time if they're the ones making the aesthetic decisions.
If I wanted to show the director of a movie an environment and get his feedback, I could make the changes right there for him to see and get the OK right away.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112084</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112052</id>
	<title>Didn't we see this slashvertisement before</title>
	<author>Anonymous</author>
	<datestamp>1258307220000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext>... like <a href="http://tech.slashdot.org/article.pl?sid=09/11/13/199249" title="slashdot.org">two days ago</a> [slashdot.org]<nobr> <wbr></nobr>...</htmltext>
<tokentext>... like two days ago [ slashdot.org ] .. .</tokentext>
<sentencetext>... like two days ago [slashdot.org] ...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112006</id>
	<title>Who cares</title>
	<author>Adolf Hitroll</author>
	<datestamp>1258306740000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>-1</modscore>
	<htmltext><p>NVidia make shit, their drivers are horrible.</p></htmltext>
<tokentext>NVidia make shit , their drivers are horrible .</tokentext>
<sentencetext>NVidia make shit, their drivers are horrible.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30114306</id>
	<title>Still...</title>
	<author>widelight</author>
	<datestamp>1258380480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Still no cure for cancer<nobr> <wbr></nobr>:(</p></htmltext>
<tokentext>Still no cure for cancer : (</tokentext>
<sentencetext>Still no cure for cancer :(</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112084</id>
	<title>Better to just edit it on a computer</title>
	<author>pieisgood</author>
	<datestamp>1258307760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I'd rather manage my scenes on my own computer where I have a complete interface with the work I've done. If they have a service where I could upload my scenes and have them render them for me quickly I'd be happy... but they have to do this real time stuff with minimal ability to edit and experiment with your scene.

The only use would probably be for sales men and designers who want to show their work in different lighting to a potential customer... but even then they could render that ahead of time.

I just don't get it, could someone enlighten me?</htmltext>
<tokentext>I 'd rather manage my scenes on my own computer where I have a complete interface with the work I 've done .
If they have a service where I could upload my scenes and have them render them for me quickly I 'd be happy... but they have to do this real time stuff with minimal ability to edit and experiment with your scene .
The only use would probably be for sales men and designers who want to show their work in different lighting to a potential customer... but even then they could render that ahead of time .
I just do n't get it , could someone enlighten me ?</tokentext>
<sentencetext>I'd rather manage my scenes on my own computer where I have a complete interface with the work I've done.
If they have a service where I could upload my scenes and have them render them for me quickly I'd be happy... but they have to do this real time stuff with minimal ability to edit and experiment with your scene.
The only use would probably be for sales men and designers who want to show their work in different lighting to a potential customer... but even then they could render that ahead of time.
I just don't get it, could someone enlighten me?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30113300</id>
	<title>Too specific</title>
	<author>Jeppe Salvesen</author>
	<datestamp>1258367520000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>The uses are probably not yet understood. This is cool technology and some of the tens of millions of developers will find good use for it. The interesting bit is that you gain access to a huge render farm without buying a lot of servers. If your load is uneven, this service will save you a lot of money (and power too).</p><p>Anyhow, from the top of my head: Cars, architecture, city planning, visualizing climate change, next-generation GPS navigation devices.</p></htmltext>
<tokentext>The uses are probably not yet understood .
This is cool technology and some of the tens of millions of developers will find good use for it .
The interesting bit is that you gain access to a huge render farm without buying a lot of servers .
If your load is uneven , this service will save you a lot of money ( and power too ) .Anyhow , from the top of my head : Cars , architecture , city planning , visualizing climate change , next-generation GPS navigation devices .</tokentext>
<sentencetext>The uses are probably not yet understood.
This is cool technology and some of the tens of millions of developers will find good use for it.
The interesting bit is that you gain access to a huge render farm without buying a lot of servers.
If your load is uneven, this service will save you a lot of money (and power too).Anyhow, from the top of my head: Cars, architecture, city planning, visualizing climate change, next-generation GPS navigation devices.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112332</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112682</id>
	<title>Good for VR</title>
	<author>Anonymous</author>
	<datestamp>1258402320000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>This is a great advancement for high end virtual reality systems, but the current state of "rendering in the cloud" sounds like either a solution looking for a problem or the wrong application of the technology.</p><p>On a future Internet with sub 30 ms latency, this would ROCK.  [You could have low-powered wearable augmented reality devices, "Rainbows End" style gaming, and maybe even the engine behind a Snow Crash style metaverse that remote users can log in to].</p><p>NVidia is NOT doing itself a favor with the lame empty office with boring blinds demo.  They'd better come up with something sexier quick if they want to sell this (and I don't mean the remote avatar someone posted a link to).</p><p>This reminds me of the "thin client" hype circa 1999.  "Thin clients" exist now in the form of AJAX enabled web browsers, Netbooks, phones etc, but that technology took about a decade to come to fruition and found a different (and more limited) niche than all the hype a decade ago [they were supposed to replace worker's PCs for word processing, spreadsheets, etc].</p></htmltext>
<tokentext>This is a great advancement for high end virtual reality systems , but the current state of " rendering in the cloud " sounds like either a solution looking for a problem or the wrong application of the technology.On a future Internet with sub 30 ms latency , this would ROCK .
[ You could have low-powered wearable augmented reality devices , " Rainbows End " style gaming , and maybe even the engine behind a Snow Crash style metaverse that remote users can log in to ] .NVidia is NOT doing itself a favor with the lame empty office with boring blinds demo .
They 'd better come up with something sexier quick if they want to sell this ( and I do n't mean the remote avatar someone posted a link to ) .This reminds me of the " thin client " hype circa 1999 .
" Thin clients " exist now in the form of AJAX enabled web browsers , Netbooks , phones etc , but that technology took about a decade to come to fruition and found a different ( and more limited ) niche than all the hype a decade ago [ they were supposed to replace worker 's PCs for word processing , spreadsheets , etc ] .</tokentext>
<sentencetext>This is a great advancement for high end virtual reality systems, but the current state of "rendering in the cloud" sounds like either a solution looking for a problem or the wrong application of the technology.On a future Internet with sub 30 ms latency, this would ROCK.
[You could have low-powered wearable augmented reality devices, "Rainbows End" style gaming, and maybe even the engine behind a Snow Crash style metaverse that remote users can log in to].NVidia is NOT doing itself a favor with the lame empty office with boring blinds demo.
They'd better come up with something sexier quick if they want to sell this (and I don't mean the remote avatar someone posted a link to).This reminds me of the "thin client" hype circa 1999.
"Thin clients" exist now in the form of AJAX enabled web browsers, Netbooks, phones etc, but that technology took about a decade to come to fruition and found a different (and more limited) niche than all the hype a decade ago [they were supposed to replace worker's PCs for word processing, spreadsheets, etc].</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112672</id>
	<title>Re:One question: Why?</title>
	<author>war4peace</author>
	<datestamp>1258402140000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>One answer: Gaming.<br>
OK, one more reason: 3D Work at home. I do that (as an amateur) and sometimes even my pretty fast machine takes hours at a time to render some scenes. I could as well send the file to RealityServer 3.0 and then render my scenes faster via a web browser, without having to wait hours and hours. That would be great for several reasons:<br>
1. While I wait for my machine to render a scene, I do other things and more than often I ask myself what the hell was that thing that I was trying to accomplish? With RealityServer, no more (long) interruptions.<br>
2. Power consumption: a CPU at max thrust will eat more power and generate more heat. I'd rather not have it do that.<br>
3. Higher efficiency. Hours of waiting equals lost productivity.<br>
Useless technology? Maybe. But that's what they said about the train and the plane, back in the days. Time will tell. For now, new tech? Bring it on! The more, the merrier. Hey, at least we get to choose<nobr> <wbr></nobr>:)</htmltext>
<tokentext>One answer : Gaming .
OK , one more reason : 3D Work at home .
I do that ( as an amateur ) and sometimes even my pretty fast machine takes hours at a time to render some scenes .
I could as well send the file to RealityServer 3.0 and then render my scenes faster via a web browser , without having to wait hours and hours .
That would be great for several reasons : 1 .
While I wait for my machine to render a scene , I do other things and more than often I ask myself what the hell was that thing that I was trying to accomplish ?
With RealityServer , no more ( long ) interruptions .
2. Power consumption : a CPU at max thrust will eat more power and generate more heat .
I 'd rather not have it do that .
3. Higher efficiency .
Hours of waiting equals lost productivity .
Useless technology ?
Maybe. But that 's what they said about the train and the plane , back in the days .
Time will tell .
For now , new tech ?
Bring it on !
The more , the merrier .
Hey , at least we get to choose : )</tokentext>
<sentencetext>One answer: Gaming.
OK, one more reason: 3D Work at home.
I do that (as an amateur) and sometimes even my pretty fast machine takes hours at a time to render some scenes.
I could as well send the file to RealityServer 3.0 and then render my scenes faster via a web browser, without having to wait hours and hours.
That would be great for several reasons:
1.
While I wait for my machine to render a scene, I do other things and more than often I ask myself what the hell was that thing that I was trying to accomplish?
With RealityServer, no more (long) interruptions.
2. Power consumption: a CPU at max thrust will eat more power and generate more heat.
I'd rather not have it do that.
3. Higher efficiency.
Hours of waiting equals lost productivity.
Useless technology?
Maybe. But that's what they said about the train and the plane, back in the days.
Time will tell.
For now, new tech?
Bring it on!
The more, the merrier.
Hey, at least we get to choose :)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112332</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30113040</id>
	<title>Re:One question: Why?</title>
	<author>im\_thatoneguy</author>
	<datestamp>1258363800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Speaking from experience... it's currently a HUGE PITA.</p><p>Sure if you have just a side view and a front view it's easy.  Render out each wheel separately.  But then what if you want a 360 view of the car now? Ooops. No dice.   And what if you want the car color to be reflected in the side view mirrors?    All the possible combinations?  Well if you give the user complete freedom that means there is an infinite number of renderings you have to do.  What if you want to see the car at night?  Now you have to double all of your renderings and Flash code.  What if you want to see the car on a street corner... another complete set of renderings.  Street corner at night?  Another set of renderings. What if you want to see the car from the driver's seat.  Another set of renderings.   What if you want to see it from the passenger seat? another complete set of renderings.   What if you want to see it with a door open.... you guessed it, another set of renderings, what if you want to see two doors open well now you have a whole rats nest of dependencies you have to render.</p><p>Sure.  If you're perfectly happy with slightly editable product brochures then the current system is fine. But if you want to add more interactivity to the user then you'll blow through the cost of 8 Tegras in a matter of days paying someone like me to make a bazillion renderings and then someone else to write the flash code to make it all interactive.</p><p>By comparison setting of a few animation and visibility switches is trivial.</p></htmltext>
<tokentext>Speaking from experience... it 's currently a HUGE PITA.Sure if you have just a side view and a front view it 's easy .
Render out each wheel separately .
But then what if you want a 360 view of the car now ?
Ooops. No dice .
And what if you want the car color to be reflected in the side view mirrors ?
All the possible combinations ?
Well if you give the user complete freedom that means there is an infinite number of renderings you have to do .
What if you want to see the car at night ?
Now you have to double all of your renderings and Flash code .
What if you want to see the car on a street corner... another complete set of renderings .
Street corner at night ?
Another set of renderings .
What if you want to see the car from the driver 's seat .
Another set of renderings .
What if you want to see it from the passenger seat ?
another complete set of renderings .
What if you want to see it with a door open.... you guessed it , another set of renderings , what if you want to see two doors open well now you have a whole rats nest of dependencies you have to render.Sure .
If you 're perfectly happy with slightly editable product brochures then the current system is fine .
But if you want to add more interactivity to the user then you 'll blow through the cost of 8 Tegras in a matter of days paying someone like me to make a bazillion renderings and then someone else to write the flash code to make it all interactive.By comparison setting of a few animation and visibility switches is trivial .</tokentext>
<sentencetext>Speaking from experience... it's currently a HUGE PITA.Sure if you have just a side view and a front view it's easy.
Render out each wheel separately.
But then what if you want a 360 view of the car now?
Ooops. No dice.
And what if you want the car color to be reflected in the side view mirrors?
All the possible combinations?
Well if you give the user complete freedom that means there is an infinite number of renderings you have to do.
What if you want to see the car at night?
Now you have to double all of your renderings and Flash code.
What if you want to see the car on a street corner... another complete set of renderings.
Street corner at night?
Another set of renderings.
What if you want to see the car from the driver's seat.
Another set of renderings.
What if you want to see it from the passenger seat?
another complete set of renderings.
What if you want to see it with a door open.... you guessed it, another set of renderings, what if you want to see two doors open well now you have a whole rats nest of dependencies you have to render.Sure.
If you're perfectly happy with slightly editable product brochures then the current system is fine.
But if you want to add more interactivity to the user then you'll blow through the cost of 8 Tegras in a matter of days paying someone like me to make a bazillion renderings and then someone else to write the flash code to make it all interactive.By comparison setting of a few animation and visibility switches is trivial.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112332</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_15_2011203_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112300
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112084
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_15_2011203_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30117408
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112302
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_15_2011203_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30115624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30113098
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112332
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_15_2011203_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112460
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112332
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_15_2011203_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112672
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112332
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_15_2011203_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30123608
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112006
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_15_2011203_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30113300
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112332
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_15_2011203_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30113058
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112360
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_15_2011203_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30119100
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112016
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_15_2011203_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30116082
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112278
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112016
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_15_2011203_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30113040
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112332
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_15_2011203_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30114662
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112682
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_15_2011203_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112726
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112022
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_15_2011203_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112472
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112332
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_15_2011203.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112052
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_15_2011203.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112302
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30117408
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_15_2011203.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112164
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_15_2011203.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112088
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_15_2011203.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30114746
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_15_2011203.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112002
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_15_2011203.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112006
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30123608
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_15_2011203.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112022
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112726
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_15_2011203.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112332
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112460
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112672
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30113300
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30113040
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30113098
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30115624
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112472
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_15_2011203.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112682
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30114662
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_15_2011203.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112446
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_15_2011203.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112016
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112278
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30116082
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30119100
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_15_2011203.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112360
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30113058
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_15_2011203.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112084
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_15_2011203.30112300
</commentlist>
</conversation>
