<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_01_18_1336241</id>
	<title>NVIDIA Previews GF100 Features and Architecture</title>
	<author>CmdrTaco</author>
	<datestamp>1263822360000</datestamp>
	<htmltext><a href="http://hothardware.com/" rel="nofollow">MojoKid</a> writes <i>"NVIDIA has decided to disclose more information regarding their <a href="http://hothardware.com/Articles/NVIDIA-GF100-Architecture-and-Feature-Preview/">next generation GF100 GPU architecture</a> today. Also known as Fermi, the GF100 GPU features 512 CUDA cores, 16 geometry units, 4 raster units, 64 texture units, 48 ROPs, and a 384-bit GDDR5 memory interface. If you're keeping count, the older GT200 features 240 CUDA cores, 42 ROPs, and 60 texture units, but the geometry and raster units, as they are implemented in GF100, are not present in the GT200 GPU. The GT200 also features a wider 512-bit memory interface, but the need for such a wide interface is somewhat negated in GF100 due to the fact that it uses GDDR5 memory which effectively offers double the bandwidth of GDDR3, clock for clock. Reportedly, the GF100 will also offer 8x the peak double-precision compute performance as its predecessor, 10x faster context switching, and <a href="http://hothardware.com/Articles/NVIDIA-GF100-Architecture-and-Feature-Preview/?page=2">new anti-aliasing</a> modes."</i></htmltext>
<tokentext>MojoKid writes " NVIDIA has decided to disclose more information regarding their next generation GF100 GPU architecture today .
Also known as Fermi , the GF100 GPU features 512 CUDA cores , 16 geometry units , 4 raster units , 64 texture units , 48 ROPs , and a 384-bit GDDR5 memory interface .
If you 're keeping count , the older GT200 features 240 CUDA cores , 42 ROPs , and 60 texture units , but the geometry and raster units , as they are implemented in GF100 , are not present in the GT200 GPU .
The GT200 also features a wider 512-bit memory interface , but the need for such a wide interface is somewhat negated in GF100 due to the fact that it uses GDDR5 memory which effectively offers double the bandwidth of GDDR3 , clock for clock .
Reportedly , the GF100 will also offer 8x the peak double-precision compute performance as its predecessor , 10x faster context switching , and new anti-aliasing modes .
"</tokentext>
<sentencetext>MojoKid writes "NVIDIA has decided to disclose more information regarding their next generation GF100 GPU architecture today.
Also known as Fermi, the GF100 GPU features 512 CUDA cores, 16 geometry units, 4 raster units, 64 texture units, 48 ROPs, and a 384-bit GDDR5 memory interface.
If you're keeping count, the older GT200 features 240 CUDA cores, 42 ROPs, and 60 texture units, but the geometry and raster units, as they are implemented in GF100, are not present in the GT200 GPU.
The GT200 also features a wider 512-bit memory interface, but the need for such a wide interface is somewhat negated in GF100 due to the fact that it uses GDDR5 memory which effectively offers double the bandwidth of GDDR3, clock for clock.
Reportedly, the GF100 will also offer 8x the peak double-precision compute performance as its predecessor, 10x faster context switching, and new anti-aliasing modes.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807498</id>
	<title>What's with the terrible naming</title>
	<author>LordKronos</author>
	<datestamp>1263827820000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>So we've had this long history with nvidia part numbers gradually increasing. 5000 series, 6000 series, etc. up until the 9000 series. At that point they needed to go to 10000, and the numbers were getting a bit unwieldy. So understandably, they decided to restart with the GT100 series and GT200 series. So now instead of continuing with a 300 series, we're going back to a 100. So we had the GT100 series and now we get the GF100 series? And GF? Seriously? People already abbreviate GeForce as GF, so now when someone says GF we can't be sure what they are talking about. Terrible marketing decision IMHO.</p></htmltext>
<tokentext>So we 've had this long history with nvidia part numbers gradually increasing .
5000 series , 6000 series , etc .
up until the 9000 series .
At that point they needed to go to 10000 , and the numbers were getting a bit unwieldy .
So understandably , they decided to restart with the GT100 series and GT200 series .
So now instead of continuing with a 300 series , we 're going back to a 100 .
So we had the GT100 series and now we get the GF100 series ?
And GF ?
Seriously ? People already abbreviate GeForce as GF , so now when someone says GF we ca n't be sure what they are talking about .
Terrible marketing decision IMHO .</tokentext>
<sentencetext>So we've had this long history with nvidia part numbers gradually increasing.
5000 series, 6000 series, etc.
up until the 9000 series.
At that point they needed to go to 10000, and the numbers were getting a bit unwieldy.
So understandably, they decided to restart with the GT100 series and GT200 series.
So now instead of continuing with a 300 series, we're going back to a 100.
So we had the GT100 series and now we get the GF100 series?
And GF?
Seriously? People already abbreviate GeForce as GF, so now when someone says GF we can't be sure what they are talking about.
Terrible marketing decision IMHO.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808850</id>
	<title>Re:Wait...</title>
	<author>TheKidWho</author>
	<datestamp>1263835320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>No one knows what the power draw actually is for Fermi except for engineers at Nvidia, 280W is highly suspect.</p><p>Also, Semiaccurate? Come on, all that site does is bash Nvidia because of the writer's grudge against them.</p><p>Remember when G80 was being released and Charlie said it was "Too hot, too slow, too late."  Yeah that turned out real well didn't it...</p></htmltext>
<tokentext>No one knows what the power draw actually is for Fermi except for engineers at Nvidia , 280W is highly suspect . Also , Semiaccurate ?
Come on , all that site does is bash Nvidia because of the writer 's grudge against them . Remember when G80 was being released and Charlie said it was " Too hot , too slow , too late . "
Yeah that turned out real well did n't it ...</tokentext>
<sentencetext>No one knows what the power draw actually is for Fermi except for engineers at Nvidia, 280W is highly suspect. Also, Semiaccurate?
Come on, all that site does is bash Nvidia because of the writer's grudge against them. Remember when G80 was being released and Charlie said it was "Too hot, too slow, too late."
Yeah that turned out real well didn't it...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807456</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30811524</id>
	<title>Re:wait a minute...</title>
	<author>maestroX</author>
	<datestamp>1263847500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>What happened to GDDR4?</p></div>
</blockquote><p>
It was bundled with Leisure Suit Larry 4.</p>
	</htmltext>
<tokentext>What happened to GDDR4 ?
It was bundled with Leisure Suit Larry 4 .</tokentext>
<sentencetext>What happened to GDDR4?
It was bundled with Leisure Suit Larry 4.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807658</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30814504</id>
	<title>Re:Tesselation could rescue PC gaming</title>
	<author>MistrBlank</author>
	<datestamp>1263819840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'd hardly consider graphics stagnant.</p><p>A perfect example is the difference between Mirror's Edge on the Xbox and PC.  There is a lot more trash floating about, there are a lot more physics involved with glass being shot out and blinds being affected by gunfire and wind in the PC version.   Trust me when I say that graphics advantages are still on the PC side, and my 5770 can't handle Mirror's Edge (a game from over a year ago) at 1080p on my home theatre.  Now it's by no means a top end card, but it is relatively new.  So yes, there is still plenty of room to move forward in processing, even at an equal level of resolution to the console brethren for detail and developers have already taken advantage of downplaying.</p><p>So I don't get what you're saying.  It seems to me like what you are saying already is happening and it isn't doing squat for "rescuing" PC gaming.</p></htmltext>
<tokentext>I 'd hardly consider graphics stagnant . A perfect example is the difference between Mirror 's Edge on the Xbox and PC .
There is a lot more trash floating about , there are a lot more physics involved with glass being shot out and blinds being affected by gunfire and wind in the PC version .
Trust me when I say that graphics advantages are still on the PC side , and my 5770 ca n't handle Mirror 's Edge ( a game from over a year ago ) at 1080p on my home theatre .
Now it 's by no means a top end card , but it is relatively new .
So yes , there is still plenty of room to move forward in processing , even at an equal level of resolution to the console brethren for detail and developers have already taken advantage of downplaying . So I do n't get what you 're saying .
It seems to me like what you are saying already is happening and it is n't doing squat for " rescuing " PC gaming .</tokentext>
<sentencetext>I'd hardly consider graphics stagnant. A perfect example is the difference between Mirror's Edge on the Xbox and PC.
There is a lot more trash floating about, there are a lot more physics involved with glass being shot out and blinds being affected by gunfire and wind in the PC version.
Trust me when I say that graphics advantages are still on the PC side, and my 5770 can't handle Mirror's Edge (a game from over a year ago) at 1080p on my home theatre.
Now it's by no means a top end card, but it is relatively new.
So yes, there is still plenty of room to move forward in processing, even at an equal level of resolution to the console brethren for detail and developers have already taken advantage of downplaying. So I don't get what you're saying.
It seems to me like what you are saying already is happening and it isn't doing squat for "rescuing" PC gaming.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808374</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807436</id>
	<title>Re:Wait...</title>
	<author>Anonymous</author>
	<datestamp>1263827280000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>At the end of TFA it states that the planned release date is Q1 2010, so releasing this information now is simply an attempt to capture the interest of those looking to buy now/soon ... with the hope they'll hold off on a purchase until it hits the store shelves.</p></htmltext>
<tokentext>At the end of TFA it states that the planned release date is Q1 2010 , so releasing this information now is simply an attempt to capture the interest of those looking to buy now/soon ... with the hope they 'll hold off on a purchase until it hits the store shelves .</tokentext>
<sentencetext>At the end of TFA it states that the planned release date is Q1 2010, so releasing this information now is simply an attempt to capture the interest of those looking to buy now/soon ... with the hope they'll hold off on a purchase until it hits the store shelves.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807274</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30811504</id>
	<title>I keep waiting for...</title>
	<author>Anonymous</author>
	<datestamp>1263847440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"But wait, there's still more!"</p></htmltext>
<tokentext>" But wait , there 's still more !
"</tokentext>
<sentencetext>"But wait, there's still more!
"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30809562</id>
	<title>Someone please tell me</title>
	<author>maroberts</author>
	<datestamp>1263838320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>What video card do people recommend you fit in your PC nowadays<br>a) on a budget (say &pound;50)<br>b) average (say &pound;100)<br>c) with a bigger budget (say &pound;250)</p><p>Bonus points if you can recommend a good (fanless) silent video card....</p></htmltext>
<tokentext>What video card do people recommend you fit in your PC nowadays a ) on a budget ( say £50 ) b ) average ( say £100 ) c ) with a bigger budget ( say £250 ) Bonus points if you can recommend a good ( fanless ) silent video card ...</tokentext>
<sentencetext>What video card do people recommend you fit in your PC nowadays a) on a budget (say £50) b) average (say £100) c) with a bigger budget (say £250) Bonus points if you can recommend a good (fanless) silent video card....</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30816154</id>
	<title>Re:"The GPU will also be execute C++ code."</title>
	<author>Anonymous</author>
	<datestamp>1263836520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Slashdot always picks the WORST tech articles. Please stop letting these submitters advertise their shitty websites if they don't understand basic computing.</p></htmltext>
<tokentext>Slashdot always picks the WORST tech articles .
Please stop letting these submitters advertise their shitty websites if they do n't understand basic computing .</tokentext>
<sentencetext>Slashdot always picks the WORST tech articles.
Please stop letting these submitters advertise their shitty websites if they don't understand basic computing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807430</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808018</id>
	<title>Costs more</title>
	<author>Sycraft-fu</author>
	<datestamp>1263830640000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext><p>The wider your memory bus, the greater the cost. Reason is that it is implemented as more parallel controllers. So you want the smallest one that gets the job done. Also, faster memory gets you nothing if the GPU isn't fast enough to access it. Memory bandwidth and GPU speed are very intertwined. Have memory slower than your GPU needs, and it'll be bottlenecking the GPU. However have it faster, and you gain nothing while increasing cost. So the idea is to get it right at the level that the GPU can make full use of it, but not be slowed down.</p><p>Apparently, 256-bit GDDR5 is enough.</p></htmltext>
<tokentext>The wider your memory bus , the greater the cost .
Reason is that it is implemented as more parallel controllers .
So you want the smallest one that gets the job done .
Also , faster memory gets you nothing if the GPU is n't fast enough to access it .
Memory bandwidth and GPU speed are very intertwined .
Have memory slower than your GPU needs , and it 'll be bottlenecking the GPU .
However have it faster , and you gain nothing while increasing cost .
So the idea is to get it right at the level that the GPU can make full use of it , but not be slowed down . Apparently , 256-bit GDDR5 is enough .</tokentext>
<sentencetext>The wider your memory bus, the greater the cost.
Reason is that it is implemented as more parallel controllers.
So you want the smallest one that gets the job done.
Also, faster memory gets you nothing if the GPU isn't fast enough to access it.
Memory bandwidth and GPU speed are very intertwined.
Have memory slower than your GPU needs, and it'll be bottlenecking the GPU.
However have it faster, and you gain nothing while increasing cost.
So the idea is to get it right at the level that the GPU can make full use of it, but not be slowed down. Apparently, 256-bit GDDR5 is enough.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807458</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807458</id>
	<title>Can someone who is more knowledgeable tell me...</title>
	<author>Anonymous</author>
	<datestamp>1263827400000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
<htmltext>Why it is that they would stick with a 256 bit memory bus (aside from the fact that clock for clock it's really the same speed as a 512 bit bus of slower memory?) Is it just because the rest of the card is a bottleneck? I don't think I can recall another card, that when all other things were equal, a faster bit bus didn't result in a sizable increase in processing power? It was obviously implemented in the previous generation of cards, so why not stick with it, use the GDDR5 and then end up with a card that's even faster? <br> <br>

Can anyone explain to me why they would do this (or not do this, depending on how you look at it?)</htmltext>
<tokentext>Why it is that they would stick with a 256 bit memory bus ( aside from the fact that clock for clock it 's really the same speed as a 512 bit bus of slower memory ? )
Is it just because the rest of the card is a bottleneck ?
I do n't think I can recall another card , that when all other things were equal , a faster bit bus did n't result in a sizable increase in processing power ?
It was obviously implemented in the previous generation of cards , so why not stick with it , use the GDDR5 and then end up with a card that 's even faster ?
Can anyone explain to me why they would do this ( or not do this , depending on how you look at it ? )</tokentext>
<sentencetext>Why it is that they would stick with a 256 bit memory bus (aside from the fact that clock for clock it's really the same speed as a 512 bit bus of slower memory?)
Is it just because the rest of the card is a bottleneck?
I don't think I can recall another card, that when all other things were equal, a faster bit bus didn't result in a sizable increase in processing power?
It was obviously implemented in the previous generation of cards, so why not stick with it, use the GDDR5 and then end up with a card that's even faster?
Can anyone explain to me why they would do this (or not do this, depending on how you look at it?)</sentencetext>
</comment>
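The bus-width question in the comment above comes down to simple arithmetic: peak memory bandwidth is bus width in bytes times transfers per second. A minimal sketch, assuming GDDR5 moves four data words per command clock versus GDDR3's two (the summary's "double the bandwidth, clock for clock" claim), with a hypothetical 1000 MHz memory clock chosen purely for comparison:

```python
def peak_bandwidth_gbs(bus_width_bits: int, clock_mhz: float, transfers_per_clock: int) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bytes) * (effective transfers per second)."""
    return (bus_width_bits / 8) * (clock_mhz * 1e6) * transfers_per_clock / 1e9

# Hypothetical equal 1000 MHz memory clock on both configurations.
wide_gddr3 = peak_bandwidth_gbs(512, 1000, 2)   # GT200-style: wide bus, double-data-rate GDDR3
narrow_gddr5 = peak_bandwidth_gbs(384, 1000, 4) # GF100-style: narrower bus, quad-pumped GDDR5

print(wide_gddr3)                 # 128.0 GB/s
print(narrow_gddr5)               # 192.0 GB/s
print(narrow_gddr5 / wide_gddr3)  # 1.5
```

Clock for clock, the 384-bit GDDR5 interface still delivers 1.5x the bandwidth of the 512-bit GDDR3 one, which is why a narrower (and cheaper, fewer parallel controllers) bus can suffice.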
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807654</id>
	<title>biz;nat3h</title>
	<author>Anonymous</author>
	<datestamp>1263828780000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><A HREF="http://goat.cx/" title="goat.cx" rel="nofollow">One Here but now the bSD license,</a> [goat.cx]</htmltext>
<tokentext>One Here but now the bSD license , [ goat.cx ]</tokentext>
<sentencetext>One Here but now the bSD license, [goat.cx]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30810182</id>
	<title>Re:Tesselation could rescue PC gaming</title>
	<author>Anonymous</author>
	<datestamp>1263841020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>XBox 360 already has hardware tessellation.</p><p>Tessellation isn't magic, and experts I've talked to don't think the visual differentiation will ever be that extraordinary. I think you might agree if you look at any of the DX11 Tessellation videos that have been posted to YouTube.</p></htmltext>
<tokentext>XBox 360 already has hardware tessellation . Tessellation is n't magic , and experts I 've talked to do n't think the visual differentiation will ever be that extraordinary .
I think you might agree if you look at any of the DX11 Tessellation videos that have been posted to YouTube .</tokentext>
<sentencetext>XBox 360 already has hardware tessellation. Tessellation isn't magic, and experts I've talked to don't think the visual differentiation will ever be that extraordinary.
I think you might agree if you look at any of the DX11 Tessellation videos that have been posted to YouTube.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808374</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807658</id>
	<title>wait a minute...</title>
	<author>buddyglass</author>
	<datestamp>1263828780000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>What happened to GDDR4?</htmltext>
<tokentext>What happened to GDDR4 ?</tokentext>
<sentencetext>What happened to GDDR4?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807356</id>
	<title>Linux</title>
	<author>Anonymous</author>
	<datestamp>1263826560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>...but will it run Linux?</p></htmltext>
<tokentext>...but will it run Linux ?</tokentext>
<sentencetext>...but will it run Linux?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30809986</id>
	<title>Re:What's with the terrible naming</title>
	<author>Anonymous</author>
	<datestamp>1263840180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Type, Architecture, chip<br>Gpu,  Fermi,           100<br>Gpu,  Tesla,            200</p><p>The actual cards will probably be called something like GTX380 or whatever.</p></htmltext>
<tokentext>Type , Architecture , chip Gpu , Fermi , 100 Gpu , Tesla , 200 The actual cards will probably be called something like GTX380 or whatever .</tokentext>
<sentencetext>Type, Architecture, chip Gpu, Fermi, 100 Gpu, Tesla, 200 The actual cards will probably be called something like GTX380 or whatever.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807498</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30814178</id>
	<title>Re:Someone please tell me</title>
	<author>Spatial</author>
	<datestamp>1263817440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><b>A:</b> Nothing.  Save money and get a B type later, these cards are not good value.  Alternatively, try the used market for a cheap Geforce 8800GT or Radeon HD4850, which will serve you pretty well.<br> <br>

<b>B:</b> <a href="http://www.overclockers.co.uk/showproduct.php?prodid=GX-046-GI" title="overclockers.co.uk">Radeon HD4870.  Great card, extremely good value.</a> [overclockers.co.uk] <br> <br>

<b>C:</b> <a href="http://www.overclockers.co.uk/showproduct.php?prodid=GX-148-XF" title="overclockers.co.uk">Radeon HD5850 kicks ass. Diminishing value for money here though.</a> [overclockers.co.uk]</htmltext>
<tokentext>A : Nothing .
Save money and get a B type later , these cards are not good value .
Alternatively , try the used market for a cheap Geforce 8800GT or Radeon HD4850 , which will serve you pretty well .
B : Radeon HD4870 .
Great card , extremely good value .
[ overclockers.co.uk ] C : Radeon HD5850 kicks ass .
Diminishing value for money here though .
[ overclockers.co.uk ]</tokentext>
<sentencetext>A: Nothing.
Save money and get a B type later, these cards are not good value.
Alternatively, try the used market for a cheap Geforce 8800GT or Radeon HD4850, which will serve you pretty well.
B: Radeon HD4870.
Great card, extremely good value.
[overclockers.co.uk]  

C: Radeon HD5850 kicks ass.
Diminishing value for money here though.
[overclockers.co.uk]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30809562</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808914</id>
	<title>Re:Linux</title>
	<author>flex941</author>
	<datestamp>1263835560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>.. it will not even run _on_ Linux probably.</htmltext>
<tokentext>.. it will not even run _on_ Linux probably .</tokentext>
<sentencetext>.. it will not even run _on_ Linux probably.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807356</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30815892</id>
	<title>What do you mean by rescue?</title>
	<author>mjwx</author>
	<datestamp>1263833160000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext>The PC market isn't going anywhere. Not even EA is willing to abandon it despite the amount of whinging they do.<blockquote><div><p>Now that graphics are largely stagnant in between console generations</p></div></blockquote><p>

Graphical hardware power is a problem on consoles, not PC. Despite their much touted power, the PS3 or Xbox360 cannot do FSAA at 1080p. Most developers have resorted to software solutions (hacks, for all intents and purposes) to get rid of jaggedness.<br> <br>

Most games made for consoles will work the same, if not better, on a low end PC (if they don't do a crappy job on porting, but Xbox to PC is pretty hard to screw up these days). The problem with PC gaming is that it is not utilised to its fullest extent. Most games are console ports or PC games bought up at about 60% completion and then consolised.</p><blockquote><div><p>the PC's graphics advantages tend to be limited to higher resolution</p></div></blockquote><p>

PC graphics 1280x1024 upwards tend to look pretty good. Compare that to Xbox (720p) or PS3 (1080p), which still look pretty bad at those resolutions. Check out the screenshots of Fallout 3 or Far Cry 2; the PC version always looks better no matter the resolution. According to the latest Steam survey, 1280x1024 is still the most popular resolution, 1680x1050 the second.</p><blockquote><div><p>anti-aliasing, and somewhat higher texture resolution</p></div></blockquote><p>

If you have the power, why not use it.</p><blockquote><div><p>If the huge new emphasis on tesselation in GF100 strikes a chord with developers</p></div></blockquote><p>

Don't get me wrong however, progress and new ideas are a good thing, but the PC gaming market is far from in trouble.</p>
	</htmltext>
<tokentext>The PC market is n't going anywhere .
Not even EA is willing to abandon it despite the amount of whinging they do . Now that graphics are largely stagnant in between console generations . Graphical hardware power is a problem on consoles , not PC .
Despite their much touted power the PS3 or Xbox360 can not do FSAA at 1080p .
Most developers have resorted to software solutions ( hacks , for all intents and purposes ) to get rid of jaggedness .
Most games made for consoles will work the same , if not better on a low end PC ( if they do n't do a crappy job on porting but Xbox to PC this is pretty hard to screw up these days ) .
The problem with PC gaming is that it is not utilised to its fullest extent .
Most games are console ports or PC games bought up at about 60 % completion and then consolised . The PC 's graphics advantages tend to be limited to higher resolution . PC graphics 1280x1024 upwards tend to look pretty good .
Compare that to Xbox ( 720p ) or PS3 ( 1080p ) which still look pretty bad at those resolutions .
Check out the screenshots of Fallout 3 or Far Cry 2 , the PC version always looks better no matter the resolution .
According to the latest Steam survey 1280x1024 is still the most popular resolution , 1680x1050 the second . Anti-aliasing , and somewhat higher texture resolution . If you have the power , why not use it . If the huge new emphasis on tesselation in GF100 strikes a chord with developers . Do n't get me wrong however , progress and new ideas are a good thing but the PC gaming market is far from in trouble .</tokentext>
<sentencetext>The PC market isn't going anywhere.
Not even EA is willing to abandon it despite the amount of whinging they do. Now that graphics are largely stagnant in between console generations.

Graphical hardware power is a problem on consoles, not PC.
Despite their much touted power the PS3 or Xbox360 cannot do FSAA at 1080p.
Most developers have resorted to software solutions (hacks, for all intents and purposes) to get rid of jaggedness.
Most games made for consoles will work the same, if not better on a low end PC (if they don't do a crappy job on porting but Xbox to PC is pretty hard to screw up these days).
The problem with PC gaming is that it is not utilised to its fullest extent.
Most games are console ports or PC games bought up at about 60% completion and then consolised. The PC's graphics advantages tend to be limited to higher resolution.

PC graphics 1280x1024 upwards tend to look pretty good.
Compare that to Xbox (720p) or PS3 (1080p) which still look pretty bad at those resolutions.
Check out the screenshots of Fallout 3 or Far Cry 2, the PC version always looks better no matter the resolution.
According to the latest Steam survey 1280x1024 is still the most popular resolution, 1680x1050 the second. Anti-aliasing, and somewhat higher texture resolution.

If you have the power, why not use it. If the huge new emphasis on tesselation in GF100 strikes a chord with developers.

Don't get me wrong however, progress and new ideas are a good thing but the PC gaming market is far from in trouble.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808374</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807442</id>
	<title>double-precision</title>
	<author>pigwiggle</author>
	<datestamp>1263827280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>is where it's at for scientific computation.  Folks are moving their codes to GPUs now, betting the double-precision performance will get there soon.  8x increase in compute performance looks promising, assuming it translates into real world gains.</p></htmltext>
<tokentext>is where it 's at for scientific computation .
Folks are moving their codes to GPUs now , betting the double-precision performance will get there soon .
8x increase in compute performance looks promising , assuming it translates into real world gains .</tokentext>
<sentencetext>is where it's at for scientific computation.
Folks are moving their codes to GPUs now, betting the double-precision performance will get there soon.
8x increase in compute performance looks promising, assuming it translates into real world gains.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30813644</id>
	<title>Re:Costs more</title>
	<author>BikeHelmet</author>
	<datestamp>1263814440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Have memory slower than your GPU needs, and it'll be bottlenecking the GPU. However have it faster, and you gain nothing while increasing cost. So the idea is to get it right at the level that the GPU can make full use of it, but not be slowed down.</p></div><p>My old 7900GS was the first card where I felt like the memory wasn't being fully utilized by the GPU.</p><p>It had a near negligible performance impact running 4xAA on most games.</p><p>My next card (8800GS) had a higher framerate, but also a bigger hit from 4xAA.</p>
	</htmltext>
<tokentext>Have memory slower than your GPU needs , and it 'll be bottlenecking the GPU .
However have it faster , and you gain nothing while increasing cost .
So the idea is to get it right at the level that the GPU can make full use of it , but not be slowed down.My old 7900GS was the first card where I felt like the memory was n't being fully utilized by the GPU.It had a near negligible performance impact running 4xAA on most games.My next card ( 8800GS ) had a higher framerate , but also a bigger hit from 4xAA .</tokentext>
<sentencetext>Have memory slower than your GPU needs, and it'll be bottlenecking the GPU.
However have it faster, and you gain nothing while increasing cost.
So the idea is to get it right at the level that the GPU can make full use of it, but not be slowed down.
My old 7900GS was the first card where I felt like the memory wasn't being fully utilized by the GPU.
It had a near negligible performance impact running 4xAA on most games.
My next card (8800GS) had a higher framerate, but also a bigger hit from 4xAA.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808018</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807456</id>
	<title>Re:Wait...</title>
	<author>galaad2</author>
	<datestamp>1263827400000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext><p>280W power drain, 550mm^2 chip size =&gt; no thanks, i'll pass.</p><p><a href="http://www.semiaccurate.com/2010/01/17/nvidia-gf100-takes-280w-and-unmanufacturable" title="semiaccurate.com">http://www.semiaccurate.com/2010/01/17/nvidia-gf100-takes-280w-and-unmanufacturable</a> [semiaccurate.com]</p></htmltext>
<tokentext>280W power drain , 550mm ^ 2 chip size = &gt; no thanks , i 'll pass.http : //www.semiaccurate.com/2010/01/17/nvidia-gf100-takes-280w-and-unmanufacturable [ semiaccurate.com ]</tokentext>
<sentencetext>280W power drain, 550mm^2 chip size =&gt; no thanks, i'll pass. http://www.semiaccurate.com/2010/01/17/nvidia-gf100-takes-280w-and-unmanufacturable [semiaccurate.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807274</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807274</id>
	<title>Wait...</title>
	<author>Anonymous</author>
	<datestamp>1263826080000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>Why more disclosure now? There doesn't seem to be any major AMD or, gasp, Intel product launch in progress...</p></htmltext>
<tokentext>Why more disclosure now ?
There does n't seem to be any major AMD or , gasp , Intel product launch in progress.. .</tokentext>
<sentencetext>Why more disclosure now?
There doesn't seem to be any major AMD or, gasp, Intel product launch in progress...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807368</id>
	<title>Should AMD sue them too?</title>
	<author>Anonymous</author>
	<datestamp>1263826680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>For making the better GPU?<nobr> <wbr></nobr>:P</p></htmltext>
<tokentext>For making the better GPU ?
: P</tokentext>
<sentencetext>For making the better GPU?
:P</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30811276</id>
	<title>Re:Tesselation could rescue PC gaming</title>
	<author>John Whitley</author>
	<datestamp>1263846540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Now that graphics are largely stagnant in between console generations</p></div><p>I'm afraid that you've lost me.  XBox to XBox 360, PS2 to PS3, both represent substantial leaps in graphics performance.  In the XBox/PS2 generation, game teams clearly had to fight to allocate polygon budgets well, and it was quite visible in the end result.  That's not so much the case in current generation consoles.  It's also telling that transitions between in-game scenes and pre-rendered content aren't nearly as jarringly obvious as they used to be.  And let's not forget the higher resolutions that current consoles are expected to seamlessly tackle.</p><p>Console makers won't be interested in stopping this trend until cost or engineering concerns block them, or until customers stop caring.  I expect the latter to happen only around the time uncanny-valley-crossed photorealistic scenes can be simulated and rendered in realtime.</p>
	</htmltext>
<tokentext>Now that graphics are largely stagnant in between console generationsI 'm afraid that you 've lost me .
XBox to XBox 360 , PS2 to PS3 , both represent substantial leaps in graphics performance .
In the XBox/PS2 generation , game teams clearly had to fight to allocate polygon budgets well , and it was quite visible in the end result .
That 's not so much the case in current generation consoles .
It 's also telling that transitions between in-game scenes and pre-rendered content are n't nearly as jarringly obvious as they used to be .
And let 's not forget the higher resolutions that current consoles are expected to seamlessly tackle.Console makers wo n't be interested in stopping this trend until cost or engineering concerns block them , or until customers stop caring .
I expect the latter to happen only around the time uncanney-valley-crossed photorealistic scenes can be simulated and rendered in realtime .</tokentext>
<sentencetext>Now that graphics are largely stagnant in between console generations
I'm afraid that you've lost me.
XBox to XBox 360, PS2 to PS3, both represent substantial leaps in graphics performance.
In the XBox/PS2 generation, game teams clearly had to fight to allocate polygon budgets well, and it was quite visible in the end result.
That's not so much the case in current generation consoles.
It's also telling that transitions between in-game scenes and pre-rendered content aren't nearly as jarringly obvious as they used to be.
And let's not forget the higher resolutions that current consoles are expected to seamlessly tackle.
Console makers won't be interested in stopping this trend until cost or engineering concerns block them, or until customers stop caring.
I expect the latter to happen only around the time uncanny-valley-crossed photorealistic scenes can be simulated and rendered in realtime.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808374</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30809890</id>
	<title>Re:Someone please tell me</title>
	<author>Anonymous</author>
	<datestamp>1263839760000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>1</modscore>
	<htmltext>What's that funny squiggly L-shaped thing where the dollar sign is supposed to be?<nobr> <wbr></nobr>:-P
<br> <br>
I'm running a single Radeon 4850 and have no problem with it whatsoever.
<br> <br>
A friend of mine is running two GeForce 260 cards in SLI mode which make his system operate at roughly the same temperature as the surface of the sun.
<br> <br>
We both play the same modern first person shooter games.  If you bring up the numbers, he might get 80fps compared to my 65fps.  However I honestly cannot notice any difference.
<br> <br>
The real difference is that he spent over $400 (approx. squiggly L-shape650) compared to my $125 (approx. squiggly L-shape230) and has to open the windows to his room in the middle of winter to cool it down.</htmltext>
<tokentext>What 's that funny squiggly L-shaped thing where the dollar sign is supposed to be ?
: -P I 'm running a single Radeon 4850 and have no problem with it whatsoever .
A friend of mine is running two GeForce 260 cards in SLI mode which make his system operate at roughly the same temperature as the surface of the sun .
We both play the same modern first person shooter games .
If you bring up the numbers , he might get 80fps compared to my 65fps .
However I honestly can not notice any difference .
The real difference is that he spent over $ 400 ( approx .
squiggly L-shape650 ) compared to my $ 125 ( approx .
squiggly L-shape230 ) and has to open the windows to his room in the middle of winter to cool it down .</tokentext>
<sentencetext>What's that funny squiggly L-shaped thing where the dollar sign is supposed to be?
:-P
 
I'm running a single Radeon 4850 and have no problem with it whatsoever.
A friend of mine is running two GeForce 260 cards in SLI mode which make his system operate at roughly the same temperature as the surface of the sun.
We both play the same modern first person shooter games.
If you bring up the numbers, he might get 80fps compared to my 65fps.
However I honestly cannot notice any difference.
The real difference is that he spent over $400 (approx. squiggly L-shape650) compared to my $125 (approx. squiggly L-shape230) and has to open the windows to his room in the middle of winter to cool it down.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30809562</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807430</id>
	<title>"The GPU will also be execute C++ code."</title>
	<author>maxwell demon</author>
	<datestamp>1263827220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>From the article:<br>"The GPU will also be execute C++ code."</p><p>They integrate a C++ interpreter (or JIT compiler) into their graphics chip?</p></htmltext>
<tokentext>From the article : " The GPU will also be execute C + + code .
" They integrate a C + + interpreter ( or JIT compiler ) into their graphics chip ?</tokentext>
<sentencetext>From the article: "The GPU will also be execute C++ code."
They integrate a C++ interpreter (or JIT compiler) into their graphics chip?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807936</id>
	<title>Re:What's with the terrible naming</title>
	<author>Zantetsuken</author>
	<datestamp>1263830160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>No, no, no... the GF abbreviation is for "Girl Friend 100"</htmltext>
<tokentext>No , no , no... the GF abbreviation is for " Girl Friend 100 "</tokentext>
<sentencetext>No, no, no... the GF abbreviation is for "Girl Friend 100"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807498</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808108</id>
	<title>Re:wait a minute...</title>
	<author>Spatial</author>
	<datestamp>1263831120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>According to Wikipedia, it was used with some of AMD's 1000 and 2000 series GPUs.</htmltext>
<tokentext>According to Wikipedia , it was used with some of AMD 's 1000 and 2000 series GPUs .</tokentext>
<sentencetext>According to Wikipedia, it was used with some of AMD's 1000 and 2000 series GPUs.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807658</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30811242</id>
	<title>Beware the future upgrade</title>
	<author>Anonymous</author>
	<datestamp>1263846420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>from GF100 to MRS100.</htmltext>
<tokentext>from GF100 to MRS100 .</tokentext>
<sentencetext>from GF100 to MRS100.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807702</id>
	<title>Re:"The GPU will also be execute C++ code."</title>
	<author>dskzero</author>
	<datestamp>1263828960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>That's rather ambiguous.</htmltext>
<tokentext>That 's rather ambiguous .</tokentext>
<sentencetext>That's rather ambiguous.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807430</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807874</id>
	<title>Re:wait a minute...</title>
	<author>grimJester</author>
	<datestamp>1263829860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Well, they could have gone to GDDR4, <a href="http://www.theonion.com/content/node/33930" title="theonion.com">but thought...</a> [theonion.com]</htmltext>
<tokentext>Well , they could have gone to GDDR4 , but thought... [ theonion.com ]</tokentext>
<sentencetext>Well, they could have gone to GDDR4, but thought... [theonion.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807658</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30810736</id>
	<title>Re:Tesselation could rescue PC gaming</title>
	<author>crc79</author>
	<datestamp>1263843720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I remember seeing something like this in the old game "Sacrifice". I wonder if their method was similar...</p></htmltext>
<tokentext>I remember seeing something like this in the old game " Sacrifice " .
I wonder if their method was similar.. .</tokentext>
<sentencetext>I remember seeing something like this in the old game "Sacrifice".
I wonder if their method was similar...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808374</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807398</id>
	<title>Anandtech</title>
	<author>SpeedyDX</author>
	<datestamp>1263826920000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext><p>Anandtech also has an article up about the GF100. They generally have very well written, in-depth articles: <a href="http://www.anandtech.com/video/showdoc.aspx?i=3721" title="anandtech.com">http://www.anandtech.com/video/showdoc.aspx?i=3721</a> [anandtech.com]</p></htmltext>
<tokentext>Anandtech also has an article up about the GF100 .
They generally have very well written , in-depth articles : http : //www.anandtech.com/video/showdoc.aspx ? i = 3721 [ anandtech.com ]</tokentext>
<sentencetext>Anandtech also has an article up about the GF100.
They generally have very well written, in-depth articles: http://www.anandtech.com/video/showdoc.aspx?i=3721 [anandtech.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808614</id>
	<title>Re:Wait...</title>
	<author>Hal\_Porter</author>
	<datestamp>1263833940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The engineers had a cool idea and they asked the sales guys. And the sales guys said "Dude that's fucking awesome! What are you waiting for? Stick it on the web?"</p></htmltext>
<tokentext>The engineers had a cool idea and they asked the sales guys .
And the sales guys said " Dude that 's fucking awesome !
What are you waiting for ?
Stick it on the web ?
"</tokentext>
<sentencetext>The engineers had a cool idea and they asked the sales guys.
And the sales guys said "Dude that's fucking awesome!
What are you waiting for?
Stick it on the web?
"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807274</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807374</id>
	<title>Wow, that article is terribly written...</title>
	<author>dskzero</author>
	<datestamp>1263826680000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext>I understand most of the time people who write about computers aren't exactly literature graduates, but wtf, at least write correctly. Use some spell checker or have someone proof read it.</htmltext>
<tokentext>I understand most of the time people who write about computers are n't exactly literature graduates , but wtf , at least write correctly .
Use some spell checker or have someone proof read it .</tokentext>
<sentencetext>I understand most of the time people who write about computers aren't exactly literature graduates, but wtf, at least write correctly.
Use some spell checker or have someone proof read it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808754</id>
	<title>Re:What's with the terrible naming</title>
	<author>TheKidWho</author>
	<datestamp>1263834900000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>GF100 is the name of the chip.  The cards will be called the GT300 series.</p></htmltext>
<tokentext>GF100 is the name of the chip .
The cards will be called the GT300 series .</tokentext>
<sentencetext>GF100 is the name of the chip.
The cards will be called the GT300 series.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807498</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807670</id>
	<title>Re:"The GPU will also be execute C++ code."</title>
	<author>LordKronos</author>
	<datestamp>1263828840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Without more details, I suspect they've just made a more capable language that lets you write your shaders and stuff in something that looks just like C++.</p></htmltext>
<tokentext>Without more details , I suspect they 've just made a more capable language that lets you write your shaders and stuff in something that looks just like C + + .</tokentext>
<sentencetext>Without more details, I suspect they've just made a more capable language that lets you write your shaders and stuff in something that looks just like C++.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807430</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808374</id>
	<title>Tesselation could rescue PC gaming</title>
	<author>Anonymous</author>
	<datestamp>1263832620000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>Now that graphics are largely stagnant in between console generations, the PC's graphics advantages tend to be limited to higher resolution, higher framerate, anti-aliasing, and somewhat higher texture resolution.  If the huge new emphasis on tesselation in GF100 strikes a chord with developers, and especially if something like it gets into the next console generation, games may ship with much more detailed geometry which will then automatically scale to the performance of the hardware on which they're run.  This would allow PC graphics to gain the additional advantage of having an order of magnitude increase in geometry detail, which would make more of a visible difference than any of the advantages it currently has, and it would occur with virtually no extra work by developers.  It would also allow performance to scale much more effectively across a wide range of PC hardware, allowing developers to simultaneously hit the casual and enthusiast markets much more effectively.</htmltext>
<tokentext>Now that graphics are largely stagnant in between console generations , the PC 's graphics advantages tend to be limited to higher resolution , higher framerate , anti-aliasing , and somewhat higher texture resolution .
If the huge new emphasis on tesselation in GF100 strikes a chord with developers , and especially if something like it gets into the next console generation , games may ship with much more detailed geometry which will then automatically scale to the performance of the hardware on which they 're run .
This would allow PC graphics to gain the additional advantage of having an order of magnitude increase in geometry detail , which would make more of a visible difference than any of the advantages it currently has , and it would occur with virtually no extra work by developers .
It would also allow performance to scale much more effectively across a wide range of PC hardware , allowing developers to simultaneously hit the casual and enthusiast markets much more effectively .</tokentext>
<sentencetext>Now that graphics are largely stagnant in between console generations, the PC's graphics advantages tend to be limited to higher resolution, higher framerate, anti-aliasing, and somewhat higher texture resolution.
If the huge new emphasis on tesselation in GF100 strikes a chord with developers, and especially if something like it gets into the next console generation, games may ship with much more detailed geometry which will then automatically scale to the performance of the hardware on which they're run.
This would allow PC graphics to gain the additional advantage of having an order of magnitude increase in geometry detail, which would make more of a visible difference than any of the advantages it currently has, and it would occur with virtually no extra work by developers.
It would also allow performance to scale much more effectively across a wide range of PC hardware, allowing developers to simultaneously hit the casual and enthusiast markets much more effectively.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807732</id>
	<title>Re:Wait...</title>
	<author>Anonymous</author>
	<datestamp>1263829080000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I'll pass, too. I prefer them to be perki, not just fermi, on my GF.</p></htmltext>
<tokentext>I 'll pass , too .
I prefer them to be perki , not just fermi , on my GF .</tokentext>
<sentencetext>I'll pass, too.
I prefer them to be perki, not just fermi, on my GF.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807456</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30816154
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807430
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807670
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807430
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30814504
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807732
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807456
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807274
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807702
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807430
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808754
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807498
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807436
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807274
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30814178
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30809562
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30813644
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808018
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807458
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807874
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807658
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30811524
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807658
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30809890
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30809562
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808108
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807658
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30811276
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30809986
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807498
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808850
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807456
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807274
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807936
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807498
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808914
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807356
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30815892
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808614
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807274
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30810736
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_18_1336241_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30810182
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808374
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_18_1336241.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807658
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30811524
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807874
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808108
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_18_1336241.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807458
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808018
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30813644
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_18_1336241.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807274
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807456
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808850
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807732
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807436
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808614
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_18_1336241.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807368
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_18_1336241.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30811242
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_18_1336241.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807430
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807670
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30816154
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807702
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_18_1336241.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807398
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_18_1336241.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808374
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30814504
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30810736
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30815892
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30810182
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30811276
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_18_1336241.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807498
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807936
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808754
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30809986
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_18_1336241.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30809562
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30809890
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30814178
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_18_1336241.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30807356
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_18_1336241.30808914
</commentlist>
</conversation>
