<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_03_27_0118233</id>
	<title>Nvidia's GF100 Turns Into GeForce GTX 480 and 470</title>
	<author>timothy</author>
	<datestamp>1269705120000</datestamp>
	<htmltext>crazipper writes <i>"After months of <a href="http://www.tomshardware.com/reviews/gf100-fermi-directx-11,2536.html">talking architecture and functionality</a>, Nvidia is finally going public with the performance of its $500 GeForce GTX 480 and $350 GeForce GTX 470 graphics cards, both derived from the company's first DirectX 11-capable GPU, GF100. Tom's Hardware just posted <a href="http://www.tomshardware.com/reviews/geforce-gtx-480,2585.html">a comprehensive look at the new cards</a>, including their power requirements and performance attributes. Two GTX 480s in SLI seem to scale impressively well &mdash; providing you have $1,000 for graphics, a beefy power supply, and a case with lots of airflow."</i></htmltext>
<tokentext>crazipper writes " After months of talking architecture and functionality , Nvidia is finally going public with the performance of its $ 500 GeForce GTX 480 and $ 350 GeForce GTX 470 graphics cards , both derived from the company 's first DirectX 11-capable GPU , GF100 .
Tom 's Hardware just posted a comprehensive look at the new cards , including their power requirements and performance attributes .
Two GTX 480s in SLI seem to scale impressively well — providing you have $ 1,000 for graphics , a beefy power supply , and a case with lots of airflow .
"</tokentext>
<sentencetext>crazipper writes "After months of talking architecture and functionality, Nvidia is finally going public with the performance of its $500 GeForce GTX 480 and $350 GeForce GTX 470 graphics cards, both derived from the company's first DirectX 11-capable GPU, GF100.
Tom's Hardware just posted a comprehensive look at the new cards, including their power requirements and performance attributes.
Two GTX 480s in SLI seem to scale impressively well — providing you have $1,000 for graphics, a beefy power supply, and a case with lots of airflow.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637508</id>
	<title>Re:This is why we need the on-live service to succ</title>
	<author>chazwurth</author>
	<datestamp>1269721260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>And you know who buys the top of the line super expensive cards?  Pretty much no one.</p></div><p>Then why can't supply satisfy demand? Prices on all the enthusiast-oriented cards have been going up for months, and if you really want a top-end card (5970 for example), it's really hard to find one.</p><p>Monitors are getting bigger and cheaper, and a lot of people want to play at (minimally) 1920x1200 at high settings. For newer games that takes expensive cards.</p>
	</htmltext>
<tokentext>And you know who buys the top of the line super expensive cards ?
Pretty much no one .
Then why ca n't supply satisfy demand ?
Prices on all the enthusiast-oriented cards have been going up for months , and if you really want a top-end card ( 5970 for example ) , it 's really hard to find one .
Monitors are getting bigger and cheaper , and a lot of people want to play at ( minimally ) 1920x1200 at high settings .
For newer games that takes expensive cards .</tokentext>
<sentencetext>And you know who buys the top of the line super expensive cards?
Pretty much no one.
Then why can't supply satisfy demand?
Prices on all the enthusiast-oriented cards have been going up for months, and if you really want a top-end card (5970 for example), it's really hard to find one.
Monitors are getting bigger and cheaper, and a lot of people want to play at (minimally) 1920x1200 at high settings.
For newer games that takes expensive cards.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637338</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636538</id>
	<title>Loop</title>
	<author>Anonymous</author>
	<datestamp>1269622560000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>-1</modscore>
	<htmltext>Poo poo poo I'm stuck in a poo loop</htmltext>
<tokentext>Poo poo poo I 'm stuck in a poo loop</tokentext>
<sentencetext>Poo poo poo I'm stuck in a poo loop</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637940</id>
	<title>good card for playing with GPGPU?</title>
	<author>FuckingNickName</author>
	<datestamp>1269686160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Per subject, what would be a reasonable card for playing with GPGPU tech (under Win7)? I have been thinking about the GT220 or GT240, and while I am bombarded with reviews by Top Elite gamer sites indicating that these are low to mid range cards, as far as I can tell they basically do what the higher range cards do, but with fewer cores/less memory/slower clock. And the only significant thing I might be missing out on is double precision arithmetic.</p><p>Of course, I am likely to be wrong... what else would I not be able to play with GPGPU-wise by considering an $80ish card rather than a $1000 one?</p></htmltext>
<tokentext>Per subject , what would be a reasonable card for playing with GPGPU tech ( under Win7 ) ?
I have been thinking about the GT220 or GT240 , and while I am bombarded with reviews by Top Elite gamer sites indicating that these are low to mid range cards , as far as I can tell they basically do what the higher range cards do , but with fewer cores/less memory/slower clock .
And the only significant thing I might be missing out on is double precision arithmetic .
Of course , I am likely to be wrong... what else would I not be able to play with GPGPU-wise by considering an $ 80ish card rather than a $ 1000 one ?</tokentext>
<sentencetext>Per subject, what would be a reasonable card for playing with GPGPU tech (under Win7)?
I have been thinking about the GT220 or GT240, and while I am bombarded with reviews by Top Elite gamer sites indicating that these are low to mid range cards, as far as I can tell they basically do what the higher range cards do, but with fewer cores/less memory/slower clock.
And the only significant thing I might be missing out on is double precision arithmetic.
Of course, I am likely to be wrong... what else would I not be able to play with GPGPU-wise by considering an $80ish card rather than a $1000 one?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31639234</id>
	<title>CS5 ?</title>
	<author>yakumo.unr</author>
	<datestamp>1269703440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Will the Adobe CS5 Mercury Playback Engine run on this or are they really locking it JUST to Quadros?</p></htmltext>
<tokentext>Will the Adobe CS5 Mercury Playback Engine run on this or are they really locking it JUST to Quadros ?</tokentext>
<sentencetext>Will the Adobe CS5 Mercury Playback Engine run on this or are they really locking it JUST to Quadros?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31642000</id>
	<title>Folding@Home</title>
	<author>shino_crowell</author>
	<datestamp>1269722640000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>For most gaming applications ATI ran away with this round in the price/performance category.  For F@H though, I think this is going to be a very interesting card; Nvidia just folds better than ATI.  There are numerous reasons for this, and finger-pointing is futile, but that's the cold hard fact.  The extended time that software-side engineers have had to play around with CUDA seems to have been beneficial.  In time, and with work on their OpenCL implementation, I think the current generation Radeons will catch up, but not for a while. I'm mostly interested in seeing how this card performs against the GTX 295, currently the best single PCB GPU folder.  If the retail price of the GTX 470, with its optimized CUDA Cores, stays within the $350-$400 range I'd love to pick one up to play with.

Do not take this as an endorsement for either company.  I simply choose the best hardware to fit my specific needs: ATI for gaming, Nvidia for F@H</htmltext>
<tokentext>For most gaming applications ATI ran away with this round in the price/performance category .
For F @ H though , I think this is going to be a very interesting card ; Nvidia just folds better than ATI .
There are numerous reasons for this , and finger-pointing is futile , but that 's the cold hard fact .
The extended time that software-side engineers have had to play around with CUDA seems to have been beneficial .
In time , and with work on their OpenCL implementation , I think the current generation Radeons will catch up , but not for a while .
I 'm mostly interested in seeing how this card performs against the GTX 295 , currently the best single PCB GPU folder .
If the retail price of the GTX 470 , with its optimized CUDA Cores , stays within the $ 350- $ 400 range I 'd love to pick one up to play with .
Do not take this as an endorsement for either company .
I simply choose the best hardware to fit my specific needs : ATI for gaming , Nvidia for F @ H</tokentext>
<sentencetext>For most gaming applications ATI ran away with this round in the price/performance category.
For F@H though, I think this is going to be a very interesting card; Nvidia just folds better than ATI.
There are numerous reasons for this, and finger-pointing is futile, but that's the cold hard fact.
The extended time that software-side engineers have had to play around with CUDA seems to have been beneficial.
In time, and with work on their OpenCL implementation, I think the current generation Radeons will catch up, but not for a while.
I'm mostly interested in seeing how this card performs against the GTX 295, currently the best single PCB GPU folder.
If the retail price of the GTX 470, with its optimized CUDA Cores, stays within the $350-$400 range I'd love to pick one up to play with.
Do not take this as an endorsement for either company.
I simply choose the best hardware to fit my specific needs: ATI for gaming, Nvidia for F@H</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636980</id>
	<title>Better perf than this shows?</title>
	<author>Vigile</author>
	<datestamp>1269626280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Another review here points to slightly more of a performance edge to the GTX 480 and 470:<br><a href="http://www.pcper.com/article.php?aid=888" title="pcper.com">http://www.pcper.com/article.php?aid=888</a> [pcper.com]</p></htmltext>
<tokentext>Another review here points to slightly more of a performance edge to the GTX 480 and 470 : http://www.pcper.com/article.php?aid=888 [ pcper.com ]</tokentext>
<sentencetext>Another review here points to slightly more of a performance edge to the GTX 480 and 470: http://www.pcper.com/article.php?aid=888 [pcper.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637338</id>
	<title>Re:This is why we need the on-live service to succ</title>
	<author>Totenglocke</author>
	<datestamp>1269631200000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>And you know who buys the top of the line super expensive cards?  Pretty much no one.  Everyone else either buys a mid-range card or last year's top of the line.  Both of those will last you a few years and the all around computer cost is less than a console.</p><p>Don't believe me that consoles are more expensive?  I'm a PC gamer (who occasionally plays console games) and a friend of mine is a console gamer (who occasionally plays PC games).  He tries to use your argument about "it's expensive with upgrading your computer", yet he ignores the fact that 1) console games virtually never go down in price, where PC games drop in price very quickly after the first few months and 2) Consoles nickel and dime you to death.  We actually sat down and did the math one time and for his Wii, 360, PS3 and enough controllers for 4 players on each, it came out to over $2,500 for just the console hardware.  You can easily buy two very good gaming systems for less money over the course of the lifespan of a console generation.</p><p>So no, people don't turn to consoles because they're cheaper, people turn to consoles because they can't do basic math.</p></htmltext>
<tokentext>And you know who buys the top of the line super expensive cards ?
Pretty much no one .
Everyone else either buys a mid-range card or last year 's top of the line .
Both of those will last you a few years and the all around computer cost is less than a console .
Do n't believe me that consoles are more expensive ?
I 'm a PC gamer ( who occasionally plays console games ) and a friend of mine is a console gamer ( who occasionally plays PC games ) .
He tries to use your argument about " it 's expensive with upgrading your computer " , yet he ignores the fact that 1 ) console games virtually never go down in price , where PC games drop in price very quickly after the first few months and 2 ) Consoles nickel and dime you to death .
We actually sat down and did the math one time and for his Wii , 360 , PS3 and enough controllers for 4 players on each , it came out to over $ 2,500 for just the console hardware .
You can easily buy two very good gaming systems for less money over the course of the lifespan of a console generation .
So no , people do n't turn to consoles because they 're cheaper , people turn to consoles because they ca n't do basic math .</tokentext>
<sentencetext>And you know who buys the top of the line super expensive cards?
Pretty much no one.
Everyone else either buys a mid-range card or last year's top of the line.
Both of those will last you a few years and the all around computer cost is less than a console.
Don't believe me that consoles are more expensive?
I'm a PC gamer (who occasionally plays console games) and a friend of mine is a console gamer (who occasionally plays PC games).
He tries to use your argument about "it's expensive with upgrading your computer", yet he ignores the fact that 1) console games virtually never go down in price, where PC games drop in price very quickly after the first few months and 2) Consoles nickel and dime you to death.
We actually sat down and did the math one time and for his Wii, 360, PS3 and enough controllers for 4 players on each, it came out to over $2,500 for just the console hardware.
You can easily buy two very good gaming systems for less money over the course of the lifespan of a console generation.
So no, people don't turn to consoles because they're cheaper, people turn to consoles because they can't do basic math.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637102</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636610</id>
	<title>Crap Hardware vs. Crap Drivers? Is that it atm?</title>
	<author>Anonymous</author>
	<datestamp>1269623220000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>So I can choose between nice 20W idle with ATI, but shit windows and goddamn awful linux drivers with only outdated X.org / kernel support for the cards.</p><p>Or this power hungry overpriced heater (yay, summer is coming), which at least has decent drivers.</p><p>The Free Market has failed us! Damn commies!</p></htmltext>
<tokentext>So I can choose between nice 20W idle with ATI , but shit windows and goddamn awful linux drivers with only outdated X.org / kernel support for the cards .
Or this power hungry overpriced heater ( yay , summer is coming ) , which at least has decent drivers .
The Free Market has failed us !
Damn commies !</tokentext>
<sentencetext>So I can choose between nice 20W idle with ATI, but shit windows and goddamn awful linux drivers with only outdated X.org / kernel support for the cards.
Or this power hungry overpriced heater (yay, summer is coming), which at least has decent drivers.
The Free Market has failed us!
Damn commies!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31642616</id>
	<title>Re:So...</title>
	<author>Hurricane78</author>
	<datestamp>1269684360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Actually that&rsquo;s close to the perfect temperature for a steak or roast. The meat will be godlike. But it will take hours.<br>A great steak after a good fight... that would be something I could get used to. Caveman hunter style! :D</p></htmltext>
<tokentext>Actually that 's close to the perfect temperature for a steak or roast .
The meat will be godlike .
But it will take hours .
A great steak after a good fight... that would be something I could get used to .
Caveman hunter style !
: D</tokentext>
<sentencetext>Actually that’s close to the perfect temperature for a steak or roast.
The meat will be godlike.
But it will take hours.
A great steak after a good fight... that would be something I could get used to.
Caveman hunter style!
:D</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636616</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31639784</id>
	<title>Nice. Too bad nobody makes PC Games anymore...</title>
	<author>rtrifts</author>
	<datestamp>1269707820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Without putting too fine a point on it, hardware like this used to be pretty cool. I have had several GTX 260s and an Asus 4870 for the past 1.5 years. I've even got two M1710 laptops with SLI. Truth is, I've yet to really flex the muscles on *any* of this hardware since I've owned it.</p><p>There just aren't many Triple-A PC titles being made these days, let alone any that benefit much from hardware like this.</p><p>It would be very cool if there *were* such titles. But there aren't. Worse, there are not many coming into focus on the horizon, either. I suppose we can all hope the system requirements and eye candy in Star Wars: The Old Republic and in Diablo III will shine with this hardware.</p><p>But I wouldn't bet on it.  So we buy one of these to play the Witcher II? Then what?</p><p>Hardware like this is a solution looking for a problem. And that IS the problem.</p></htmltext>
<tokentext>Without putting too fine a point on it , hardware like this used to be pretty cool .
I have had several GTX 260s and an Asus 4870 for the past 1.5 years .
I 've even got two M1710 laptops with SLI .
Truth is , I 've yet to really flex the muscles on * any * of this hardware since I 've owned it .
There just are n't many Triple-A PC titles being made these days , let alone any that benefit much from hardware like this .
It would be very cool if there * were * such titles .
But there are n't .
Worse , there are not many coming into focus on the horizon , either .
I suppose we can all hope the system requirements and eye candy in Star Wars : The Old Republic and in Diablo III will shine with this hardware .
But I would n't bet on it .
So we buy one of these to play the Witcher II ?
Then what ?
Hardware like this is a solution looking for a problem .
And that IS the problem .</tokentext>
<sentencetext>Without putting too fine a point on it, hardware like this used to be pretty cool.
I have had several GTX 260s and an Asus 4870 for the past 1.5 years.
I've even got two M1710 laptops with SLI.
Truth is, I've yet to really flex the muscles on *any* of this hardware since I've owned it.
There just aren't many Triple-A PC titles being made these days, let alone any that benefit much from hardware like this.
It would be very cool if there *were* such titles.
But there aren't.
Worse, there are not many coming into focus on the horizon, either.
I suppose we can all hope the system requirements and eye candy in Star Wars: The Old Republic and in Diablo III will shine with this hardware.
But I wouldn't bet on it.
So we buy one of these to play the Witcher II?
Then what?
Hardware like this is a solution looking for a problem.
And that IS the problem.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637546</id>
	<title>Re:This is why we need the on-live service to succ</title>
	<author>PhunkySchtuff</author>
	<datestamp>1269721980000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>We actually sat down and did the math one time and for his Wii, 360, PS3 and enough controllers for 4 players on each, it came out to over $2,500 for just the console hardware.  You can easily buy two very good gaming systems for less money over the course of the lifespan of a console generation.</p></div><p>So you can buy two PCs (that can have one, or at most two people playing at once) or you can buy three consoles and enough peripheral hardware to have four people playing at once on each console and... consoles are more expensive?</p><p>Consoles are also more convenient. Turn it on. Put in a disc, or load a game off the hard drive. Play. Turn it off. Easy.</p>
	</htmltext>
<tokentext>We actually sat down and did the math one time and for his Wii , 360 , PS3 and enough controllers for 4 players on each , it came out to over $ 2,500 for just the console hardware .
You can easily buy two very good gaming systems for less money over the course of the lifespan of a console generation .
So you can buy two PCs ( that can have one , or at most two people playing at once ) or you can buy three consoles and enough peripheral hardware to have four people playing at once on each console and... consoles are more expensive ?
Consoles are also more convenient .
Turn it on .
Put in a disc , or load a game off the hard drive .
Play .
Turn it off .
Easy .</tokentext>
<sentencetext>We actually sat down and did the math one time and for his Wii, 360, PS3 and enough controllers for 4 players on each, it came out to over $2,500 for just the console hardware.
You can easily buy two very good gaming systems for less money over the course of the lifespan of a console generation.
So you can buy two PCs (that can have one, or at most two people playing at once) or you can buy three consoles and enough peripheral hardware to have four people playing at once on each console and... consoles are more expensive?
Consoles are also more convenient.
Turn it on.
Put in a disc, or load a game off the hard drive.
Play.
Turn it off.
Easy.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637338</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637328</id>
	<title>Re:Bleeding edge isn't usually worth it</title>
	<author>Anonymous</author>
	<datestamp>1269631020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Not true if you want to run new games in DX10/DX11 with the latest eyecandy.</p><p>Right now that's... Just Cause 2, Metro 2033, Shattered Horizon and a couple of others.</p><p>Within the lifetime of a GPU you'd buy today, there will be tons more. So unless you are happy to keep playing in DX9 mode (possible in some cases, but not always), I'd say Radeon HD 5850 is easily justifiable today ($300 or so) assuming you will use your GPU for 2-3 years.</p><p>$400+ GPUs are pretty much pointless, I agree there. The only point would be to run massive triple-screen setups and I think at that point the price of the GPU is the lesser issue and the price of three good quality monitors is the issue :D</p></htmltext>
<tokentext>Not true if you want to run new games in DX10/DX11 with the latest eyecandy .
Right now that 's... Just Cause 2 , Metro 2033 , Shattered Horizon and a couple of others .
Within the lifetime of a GPU you 'd buy today , there will be tons more .
So unless you are happy to keep playing in DX9 mode ( possible in some cases , but not always ) , I 'd say Radeon HD 5850 is easily justifiable today ( $ 300 or so ) assuming you will use your GPU for 2-3 years .
$ 400 + GPUs are pretty much pointless , I agree there .
The only point would be to run massive triple-screen setups and I think at that point the price of the GPU is the lesser issue and the price of three good quality monitors is the issue : D</tokentext>
<sentencetext>Not true if you want to run new games in DX10/DX11 with the latest eyecandy.
Right now that's... Just Cause 2, Metro 2033, Shattered Horizon and a couple of others.
Within the lifetime of a GPU you'd buy today, there will be tons more.
So unless you are happy to keep playing in DX9 mode (possible in some cases, but not always), I'd say Radeon HD 5850 is easily justifiable today ($300 or so) assuming you will use your GPU for 2-3 years.
$400+ GPUs are pretty much pointless, I agree there.
The only point would be to run massive triple-screen setups and I think at that point the price of the GPU is the lesser issue and the price of three good quality monitors is the issue :D</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636820</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31640470</id>
	<title>Re:Fermi needs a refresh or v2</title>
	<author>Suiggy</author>
	<datestamp>1269712020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>In the AnandTech review the GTX 400 is 2x-10x faster than the GTX 285 or Radeon 5870.</p></div><p>You were looking at GTX 480 SLI which means that it's two GTX 480s. A single GTX 480 is only marginally better than an HD5870 in the majority of benchmarks, and costs $100 more to boot. A CrossFire HD5970 system would outperform a GTX 480 SLI system.

I guess you also haven't heard of Metro 2033, STALKER: Call of Pripyat, or Shattered Horizon, all of which are very demanding games. And there will be more demanding games out later this year (Rage, Deus Ex 3, Crysis 2).

I'm calling you a troll.</p>
	</htmltext>
<tokentext>In the AnandTech review the GTX 400 is 2x-10x faster than the GTX 285 or Radeon 5870 .
You were looking at GTX 480 SLI which means that it 's two GTX 480s .
A single GTX 480 is only marginally better than an HD5870 in the majority of benchmarks , and costs $ 100 more to boot .
A CrossFire HD5970 system would outperform a GTX 480 SLI system .
I guess you also have n't heard of Metro 2033 , STALKER : Call of Pripyat , or Shattered Horizon , all of which are very demanding games .
And there will be more demanding games out later this year ( Rage , Deus Ex 3 , Crysis 2 ) .
I 'm calling you a troll .</tokentext>
<sentencetext>In the AnandTech review the GTX 400 is 2x-10x faster than the GTX 285 or Radeon 5870.
You were looking at GTX 480 SLI which means that it's two GTX 480s.
A single GTX 480 is only marginally better than an HD5870 in the majority of benchmarks, and costs $100 more to boot.
A CrossFire HD5970 system would outperform a GTX 480 SLI system.
I guess you also haven't heard of Metro 2033, STALKER: Call of Pripyat, or Shattered Horizon, all of which are very demanding games.
And there will be more demanding games out later this year (Rage, Deus Ex 3, Crysis 2).
I'm calling you a troll.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637642</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31638398</id>
	<title>More performance and analysis at HotHardware</title>
	<author>MojoKid</author>
	<datestamp>1269692760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The coverage at HotHardware shows a closer race between the NVIDIA beast and its competition:
<a href="http://hothardware.com/Articles/NVIDIA-GeForce-GTX-480-GF100-Has-Landed/" title="hothardware.com">http://hothardware.com/Articles/NVIDIA-GeForce-GTX-480-GF100-Has-Landed/</a> [hothardware.com]</htmltext>
<tokentext>The coverage at HotHardware shows a closer race between the NVIDIA beast and its competition : http://hothardware.com/Articles/NVIDIA-GeForce-GTX-480-GF100-Has-Landed/ [ hothardware.com ]</tokentext>
<sentencetext>The coverage at HotHardware shows a closer race between the NVIDIA beast and its competition:
http://hothardware.com/Articles/NVIDIA-GeForce-GTX-480-GF100-Has-Landed/ [hothardware.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637088</id>
	<title>Re:Crap Hardware vs. Crap Drivers? Is that it atm?</title>
	<author>Yaa 101</author>
	<datestamp>1269627480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Sorry but the ATI proprietary drivers are unstable, thus unusable for the most part.</p></htmltext>
<tokentext>Sorry but the ATI proprietary drivers are unstable , thus unusable for the most part .</tokentext>
<sentencetext>Sorry but the ATI proprietary drivers are unstable, thus unusable for the most part.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636970</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31639194</id>
	<title>Re:Crippled double precision, bleh.</title>
	<author>evilbessie</author>
	<datestamp>1269702960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I'm guessing that they want to sell you the $2000 top of the line Quadro FX card to do your number crunching, or the Tesla.</htmltext>
<tokentext>I 'm guessing that they want to sell you the $ 2000 top of the line Quadro FX card to do your number crunching , or the Tesla .</tokentext>
<sentencetext>I'm guessing that they want to sell you the $2000 top of the line Quadro FX card to do your number crunching, or the Tesla.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637322</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31638374</id>
	<title>Re:$1000 for graphics</title>
	<author>Kjella</author>
	<datestamp>1269692460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Well, you could simply declare that multiple monitors are for losers and get a QuadHD (3840x2160) LCD instead, like say <a href="http://www.westinghousedigital.com/details.aspx?itemnum=240" title="westinghousedigital.com">this one</a> [westinghousedigital.com]. It's only supposed to set you back <a href="http://hd.engadget.com/2008/06/21/westinghouses-56-inch-d56qx1-quad-hd-display-on-sale-for-50-00/" title="engadget.com">$50,000</a> [engadget.com] or so. A 2160 cinema projector can easily set you back a few hundred thousand if that's not enough. There are always options if you have enough money...</p></htmltext>
<tokentext>Well , you could simply declare that multiple monitors are for losers and get a QuadHD ( 3840x2160 ) LCD instead , like say this one [ westinghousedigital.com ] .
It 's only supposed to set you back $ 50,000 [ engadget.com ] or so .
A 2160 cinema projector can easily set you back a few hundred thousand if that 's not enough .
There are always options if you have enough money ...</tokentext>
<sentencetext>Well, you could simply declare that multiple monitors are for losers and get a QuadHD (3840x2160) LCD instead, like say this one [westinghousedigital.com].
It's only supposed to set you back $50,000 [engadget.com] or so.
A 2160 cinema projector can easily set you back a few hundred thousand if that's not enough.
There are always options if you have enough money...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636556</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637978</id>
	<title>Re:This is why we need the on-live service to succ</title>
	<author>sa1lnr</author>
	<datestamp>1269686940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>And don't forget the price of the games.</p><p>Here in the UK PC games are a standard &pound;30/35 Sterling and have been for years. A lot of console games that I see in the stores are up to and above 50% more expensive.</p></htmltext>
<tokentext>And do n't forget the price of the games .
Here in the UK PC games are a standard £ 30/35 Sterling and have been for years .
A lot of console games that I see in the stores are up to and above 50 % more expensive .</tokentext>
<sentencetext>And don't forget the price of the games.
Here in the UK PC games are a standard £30/35 Sterling and have been for years.
A lot of console games that I see in the stores are up to and above 50% more expensive.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637338</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31638206</id>
	<title>Re:Crap Hardware vs. Crap Drivers? Is that it atm?</title>
	<author>Fred_A</author>
	<datestamp>1269690000000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>So I can choose between nice 20W idle with ATI, but shit windows and goddamn awful linux drivers with only outdated X.org / kernel support for the cards.</p><p>Or this power hungry overpriced heater (yay, summer is coming), which at least has decent drivers.</p></div><p>I think I read somewhere (I'd have to look it up) that both ATI and nVidia make other models.<br>Maybe you could find one that's more to your liking among those ?</p><p>OTOH, with summer comes the season of open case barbecues, so nVidia has at least something going for it !<br>(is the GF100 dishwasher safe ?)</p>
	</htmltext>
<tokentext>So I can choose between nice 20W idle with ATI , but shit windows and goddamn awful linux drivers with only outdated X.org / kernel support for the cards .
Or this power hungry overpriced heater ( yay , summer is coming ) , which at least has decent drivers .
I think I read somewhere ( I 'd have to look it up ) that both ATI and nVidia make other models .
Maybe you could find one that 's more to your liking among those ?
OTOH , with summer comes the season of open case barbecues , so nVidia has at least something going for it !
( is the GF100 dishwasher safe ?
)</tokentext>
<sentencetext>So I can choose between nice 20W idle with ATI, but shit windows and goddamn awful linux drivers with only outdated X.org / kernel support for the cards.
Or this power hungry overpriced heater (yay, summer is coming), which at least has decent drivers.
I think I read somewhere (I'd have to look it up) that both ATI and nVidia make other models.
Maybe you could find one that's more to your liking among those ?
OTOH, with summer comes the season of open case barbecues, so nVidia has at least something going for it !
(is the GF100 dishwasher safe ?
)
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636610</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637142</id>
	<title>To summarize Fermi paper launch</title>
	<author>Anonymous</author>
	<datestamp>1269628140000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>To summarize Fermi paper launch:<br>Fermi is a damn hot and noisy beast<br>Fermi is more expensive and only slightly faster than the respective ATI Radeon cards, thus DAAMIT will not cut prices for Radeons in the nearest future<br>Punters will have to wait at least for two weeks for general availability<br>Fermi desperately needs a reboot/refresh/whatever to attract masses<br>It seems like NVIDIA has fallen into the same trap as with GeForce 5XXX generation launch.<br><a href="http://www.chinamobilephones.org/" title="chinamobilephones.org" rel="nofollow">China Mobile Phones</a> [chinamobilephones.org]</p></htmltext>
<tokentext>To summarize Fermi paper launch :
Fermi is a damn hot and noisy beast
Fermi is more expensive and only slightly faster than the respective ATI Radeon cards , thus DAAMIT will not cut prices for Radeons in the nearest future
Punters will have to wait at least for two weeks for general availability
Fermi desperately needs a reboot/refresh/whatever to attract masses
It seems like NVIDIA has fallen into the same trap as with GeForce 5XXX generation launch .
China Mobile Phones [ chinamobilephones.org ]</tokentext>
<sentencetext>To summarize Fermi paper launch:
Fermi is a damn hot and noisy beast
Fermi is more expensive and only slightly faster than the respective ATI Radeon cards, thus DAAMIT will not cut prices for Radeons in the nearest future
Punters will have to wait at least for two weeks for general availability
Fermi desperately needs a reboot/refresh/whatever to attract masses
It seems like NVIDIA has fallen into the same trap as with GeForce 5XXX generation launch.
China Mobile Phones [chinamobilephones.org]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31642848</id>
	<title>Re:good card for playing with GPGPU?</title>
	<author>valenti</author>
	<datestamp>1269686160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If you are interested in playing, any Nvidia card 8xxx series or above works. For instance, this Macbook has a 9400M and I was able to download the CUDA stuff and run the sample programs with no trouble. RE the double precision and # of shaders (or performance), it really depends on what your code uses and how fast you need/want it. Easiest to get something running and then see where the bottleneck is, and how much it costs to fix.</p><p>Mostly my GPU usage is for folding (folding.stanford.edu); I like the new boards because they run cooler. Just ordered a GT240 for about $65 after rebate. An advantage is that the 240 doesn't need the extra power cable. I think it will fold proteins as fast as about ten Core2 2GHz CPUs. The best card I have so far is a 250; it equals about 20 of those Core2s. Last summer I picked up some 9600gso cards for about $35; those have similar performance to the 240, but require the extra power plug.</p><p>I would like to do something like atlasfolding.com, but with much less $$. It looks like this new 480 is about 4x the 295's performance for roughly the same cost. Sounds good to me.</p><p>PS - if you get some good GPGPU code running and need more performance, try to hook up with a .edu HPCC. Most of them are getting into CUDA and might have spare cycles. You might have to switch to Linux.</p></htmltext>
<tokentext>If you are interested in playing , any Nvidia card 8xxx series or above works .
For instance , this Macbook has a 9400M and I was able to download the CUDA stuff and run the sample programs with no trouble .
RE the double precision and # of shaders ( or performance ) , it really depends on what your code uses and how fast you need/want it .
Easiest to get something running and then see where the bottleneck is , and how much it costs to fix .
Mostly my GPU usage is for folding ( folding.stanford.edu ) ; I like the new boards because they run cooler .
Just ordered a GT240 for about $ 65 after rebate .
An advantage is that the 240 does n't need the extra power cable .
I think it will fold proteins as fast as about ten Core2 2GHz CPUs .
The best card I have so far is a 250 ; it equals about 20 of those Core2s .
Last summer I picked up some 9600gso cards for about $ 35 ; those have similar performance to the 240 , but require the extra power plug .
I would like to do something like atlasfolding.com , but with much less $$ .
It looks like this new 480 is about 4x the 295 's performance for roughly the same cost .
Sounds good to me .
PS - if you get some good GPGPU code running and need more performance , try to hook up with a .edu HPCC .
Most of them are getting into CUDA and might have spare cycles .
You might have to switch to Linux .</tokentext>
<sentencetext>If you are interested in playing, any Nvidia card 8xxx series or above works.
For instance, this Macbook has a 9400M and I was able to download the CUDA stuff and run the sample programs with no trouble.
RE the double precision and # of shaders (or performance), it really depends on what your code uses and how fast you need/want it.
Easiest to get something running and then see where the bottleneck is, and how much it costs to fix.
Mostly my GPU usage is for folding (folding.stanford.edu); I like the new boards because they run cooler.
Just ordered a GT240 for about $65 after rebate.
An advantage is that the 240 doesn't need the extra power cable.
I think it will fold proteins as fast as about ten Core2 2GHz CPUs.
The best card I have so far is a 250; it equals about 20 of those Core2s.
Last summer I picked up some 9600gso cards for about $35; those have similar performance to the 240, but require the extra power plug.
I would like to do something like atlasfolding.com, but with much less $$.
It looks like this new 480 is about 4x the 295's performance for roughly the same cost.
Sounds good to me.
PS - if you get some good GPGPU code running and need more performance, try to hook up with a .edu HPCC.
Most of them are getting into CUDA and might have spare cycles.
You might have to switch to Linux.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637940</parent>
</comment>
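<!-- A minimal sketch of the deviceQuery-style check valenti describes above, against the standard CUDA runtime API. This is an illustration, not code from the thread; the file name and output formatting are assumptions. Build with: nvcc devinfo.cu -o devinfo

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable device found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s\n", i, prop.name);
        std::printf("  compute capability: %d.%d\n", prop.major, prop.minor);
        std::printf("  multiprocessors:    %d\n", prop.multiProcessorCount);
        std::printf("  global memory:      %lu MiB\n",
                    (unsigned long)(prop.totalGlobalMem >> 20));
        std::printf("  core clock:         %d MHz\n", prop.clockRate / 1000);
        // Double precision needs compute capability 1.3 or higher, which is
        // why a GT220/GT240 (compute 1.2) misses out on it, as noted above.
        if (prop.major > 1 || (prop.major == 1 && prop.minor >= 3))
            std::printf("  double precision:   yes\n");
        else
            std::printf("  double precision:   no\n");
    }
    return 0;
}
-->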
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31642596</id>
	<title>Too late...</title>
	<author>Chris Mattern</author>
	<datestamp>1269684240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Just bought a new gaming rig and went with ATI because nVidia didn't have a DX11 card.  Probably would've gone with ATI anyways, but not supporting DX11 just put nVidia right out of the running.</p></htmltext>
<tokentext>Just bought a new gaming rig and went with ATI because nVidia did n't have a DX11 card .
Probably would 've gone with ATI anyways , but not supporting DX11 just put nVidia right out of the running .</tokentext>
<sentencetext>Just bought a new gaming rig and went with ATI because nVidia didn't have a DX11 card.
Probably would've gone with ATI anyways, but not supporting DX11 just put nVidia right out of the running.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636842</id>
	<title>Re:Fermi needs a refresh or v2</title>
	<author>lightrush</author>
	<datestamp>1269625200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>You mean 1 benchmark and it is the same one at which HD5870 is about 5-10% slower than GTX480?</htmltext>
<tokentext>You mean 1 benchmark and it is the same one at which HD5870 is about 5-10 % slower than GTX480 ?</tokentext>
<sentencetext>You mean 1 benchmark and it is the same one at which HD5870 is about 5-10% slower than GTX480?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636782</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636782</id>
	<title>Re:Fermi needs a refresh or v2</title>
	<author>Anonymous</author>
	<datestamp>1269624660000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext>Point 2 is inaccurate.  The 480 is cheaper than the 5970 (by almost $200) and the 480 beats the 5970 in multiple benchmarks.</htmltext>
<tokentext>Point 2 is inaccurate .
The 480 is cheaper than the 5970 ( by almost $ 200 ) and the 480 beats the 5970 in multiple benchmarks .</tokentext>
<sentencetext>Point 2 is inaccurate.
The 480 is cheaper than the 5970 (by almost $200) and the 480 beats the 5970 in multiple benchmarks.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636580</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31641278</id>
	<title>Re:good card for playing with GPGPU?</title>
	<author>Anonymous</author>
	<datestamp>1269717060000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Well the GPGPU landscape is still forming in significant ways, and the very architectures of the software/hardware are still very much evolving.  What this means is that in most cases you can certainly write code that "in effect" accomplishes the same overall computational goals with an older card like many of the ones in the GT2xx series, but you'll be restricted architecturally in the ways in which you have to design the algorithms themselves since some constructs / capabilities that are present on the newer cards are either absent or are so inefficient that they're useless on the older cards.  NVIDIA calls their architectural divisions "compute capability" classes and they revision them like 1.0, 1.1, ... 2.0, etc. depending on the generation of card.  The newer cards have things like unified memory spaces to better support pointer use, the ability to run multiple different compute kernels at the same time, better more efficient ability to share data across compute units / threads / kernels on the GPU, better double precision, and so on.<br>You should read the following documents to get an idea of the capabilities Fermi cards have that GT2xx cards do not:<br><a href="http://developer.download.nvidia.com/compute/cuda/3_0/toolkit/docs/NVIDIA_FermiTuningGuide.pdf" title="nvidia.com" rel="nofollow">http://developer.download.nvidia.com/compute/cuda/3_0/toolkit/docs/NVIDIA_FermiTuningGuide.pdf</a> [nvidia.com]<br><a href="http://developer.download.nvidia.com/compute/cuda/3_0/docs/NVIDIA_FermiCompatibilityGuide.pdf" title="nvidia.com" rel="nofollow">http://developer.download.nvidia.com/compute/cuda/3_0/docs/NVIDIA_FermiCompatibilityGuide.pdf</a> [nvidia.com]<br><a href="http://developer.download.nvidia.com/compute/cuda/3_0/toolkit/docs/NVIDIA_CUDA_ProgrammingGuide.pdf" title="nvidia.com" rel="nofollow">http://developer.download.nvidia.com/compute/cuda/3_0/toolkit/docs/NVIDIA_CUDA_ProgrammingGuide.pdf</a> [nvidia.com]</p><p>In many cases the advantages are compelling, and can even mean the difference between algorithms that will run well on a GPU and ones that just won't, i.e. better caches, data sharing, kernel parallelism, better debugging, etc.</p><p>So if you want to have the most modern architecture available to program on, I'd say wait 3 months for a lower cost Fermi card and use that, but realize that you may be forced to develop multiple versions of your algorithms if you want the optimum Fermi performance / capabilities for one version, but you will need a differently coded version if you want to be able to run a similar program on an older GT2xx GPU for instance.</p><p>If you're content with very basic GPGPU capabilities and learning to code for them, you could get started writing OpenCL code that can run on your main CPU even without an advanced GPU, or you might be able to use the device emulator capabilities for code testing / debugging that NVIDIA may still support for limited experimentation.</p><p>If you really want to use the architecture to its fullest potential, though, you'll probably end up writing CUDA code and not OpenCL since the former gives you a lot "closer to the metal" capabilities than OpenCL does.</p><p>For the foreseeable future AMD's 58xx cards will actually handily outperform NVIDIA's Fermis in double precision math heavy kernels due to the intentionally crippled DP performance in the Fermis, though AMD has very immature OpenCL support compared to NVIDIA, and CUDA is better than OpenCL in terms of raw programming efficiency capability anyway.</p><p>I'd wait until the mainstream Fermis are out in 2-3 months, then use that, and in the meanwhile learn OpenCL now based on the CPU runtimes and study up on the CUDA docs so you have some clue about how to use it when you get the card.</p></htmltext>
<tokentext>Well the GPGPU landscape is still forming in significant ways , and the very architectures of the software/hardware are still very much evolving .
What this means is that in most cases you can certainly write code that " in effect " accomplishes the same overall computational goals with an older card like many of the ones in the GT2xx series , but you 'll be restricted architecturally in the ways in which you have to design the algorithms themselves since some constructs / capabilities that are present on the newer cards are either absent or are so inefficient that they 're useless on the older cards .
NVIDIA calls their architectural divisions " compute capability " classes and they revision them like 1.0 , 1.1 , ... 2.0 , etc. depending on the generation of card .
The newer cards have things like unified memory spaces to better support pointer use , the ability to run multiple different compute kernels at the same time , better more efficient ability to share data across compute units / threads / kernels on the GPU , better double precision , and so on .
You should read the following documents to get an idea of the capabilities Fermi cards have that GT2xx cards do not :
http://developer.download.nvidia.com/compute/cuda/3_0/toolkit/docs/NVIDIA_FermiTuningGuide.pdf [ nvidia.com ]
http://developer.download.nvidia.com/compute/cuda/3_0/docs/NVIDIA_FermiCompatibilityGuide.pdf [ nvidia.com ]
http://developer.download.nvidia.com/compute/cuda/3_0/toolkit/docs/NVIDIA_CUDA_ProgrammingGuide.pdf [ nvidia.com ]
In many cases the advantages are compelling , and can even mean the difference between algorithms that will run well on a GPU and ones that just wo n't , i.e. better caches , data sharing , kernel parallelism , better debugging , etc .
So if you want to have the most modern architecture available to program on , I 'd say wait 3 months for a lower cost Fermi card and use that , but realize that you may be forced to develop multiple versions of your algorithms if you want the optimum Fermi performance / capabilities for one version , but you will need a differently coded version if you want to be able to run a similar program on an older GT2xx GPU for instance .
If you 're content with very basic GPGPU capabilities and learning to code for them , you could get started writing OpenCL code that can run on your main CPU even without an advanced GPU , or you might be able to use the device emulator capabilities for code testing / debugging that NVIDIA may still support for limited experimentation .
If you really want to use the architecture to its fullest potential , though , you 'll probably end up writing CUDA code and not OpenCL since the former gives you a lot " closer to the metal " capabilities than OpenCL does .
For the foreseeable future AMD 's 58xx cards will actually handily outperform NVIDIA 's Fermis in double precision math heavy kernels due to the intentionally crippled DP performance in the Fermis , though AMD has very immature OpenCL support compared to NVIDIA , and CUDA is better than OpenCL in terms of raw programming efficiency capability anyway .
I 'd wait until the mainstream Fermis are out in 2-3 months , then use that , and in the meanwhile learn OpenCL now based on the CPU runtimes and study up on the CUDA docs so you have some clue about how to use it when you get the card .</tokentext>
<sentencetext>Well the GPGPU landscape is still forming in significant ways, and the very architectures of the software/hardware are still very much evolving.
What this means is that in most cases you can certainly write code that "in effect" accomplishes the same overall computational goals with an older card like many of the ones in the GT2xx series, but you'll be restricted architecturally in the ways in which you have to design the algorithms themselves since some constructs / capabilities that are present on the newer cards are either absent or are so inefficient that they're useless on the older cards.
NVIDIA calls their architectural divisions "compute capability" classes and they revision them like 1.0, 1.1, ... 2.0, etc. depending on the generation of card.
The newer cards have things like unified memory spaces to better support pointer use, the ability to run multiple different compute kernels at the same time, better more efficient ability to share data across compute units / threads / kernels on the GPU, better double precision, and so on.
You should read the following documents to get an idea of the capabilities Fermi cards have that GT2xx cards do not:
http://developer.download.nvidia.com/compute/cuda/3_0/toolkit/docs/NVIDIA_FermiTuningGuide.pdf [nvidia.com]
http://developer.download.nvidia.com/compute/cuda/3_0/docs/NVIDIA_FermiCompatibilityGuide.pdf [nvidia.com]
http://developer.download.nvidia.com/compute/cuda/3_0/toolkit/docs/NVIDIA_CUDA_ProgrammingGuide.pdf [nvidia.com]
In many cases the advantages are compelling, and can even mean the difference between algorithms that will run well on a GPU and ones that just won't, i.e. better caches, data sharing, kernel parallelism, better debugging, etc.
So if you want to have the most modern architecture available to program on, I'd say wait 3 months for a lower cost Fermi card and use that, but realize that you may be forced to develop multiple versions of your algorithms if you want the optimum Fermi performance / capabilities for one version, but you will need a differently coded version if you want to be able to run a similar program on an older GT2xx GPU for instance.
If you're content with very basic GPGPU capabilities and learning to code for them, you could get started writing OpenCL code that can run on your main CPU even without an advanced GPU, or you might be able to use the device emulator capabilities for code testing / debugging that NVIDIA may still support for limited experimentation.
If you really want to use the architecture to its fullest potential, though, you'll probably end up writing CUDA code and not OpenCL since the former gives you a lot "closer to the metal" capabilities than OpenCL does.
For the foreseeable future AMD's 58xx cards will actually handily outperform NVIDIA's Fermis in double precision math heavy kernels due to the intentionally crippled DP performance in the Fermis, though AMD has very immature OpenCL support compared to NVIDIA, and CUDA is better than OpenCL in terms of raw programming efficiency capability anyway.
I'd wait until the mainstream Fermis are out in 2-3 months, then use that, and in the meanwhile learn OpenCL now based on the CPU runtimes and study up on the CUDA docs so you have some clue about how to use it when you get the card.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637940</parent>
</comment>
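<!-- A minimal sketch of the compute-capability split discussed in the comment above, using the standard CUDA runtime API; kernel and variable names are illustrative, not from the thread. The float kernel runs on any CUDA device, while double precision only exists from compute capability 1.3 (GT200) onward, which is why the poster warns that older and newer cards may need differently coded versions.

#include <cstdio>
#include <cuda_runtime.h>

// Single-precision AXPY: y = a*x + y, one element per thread.
__global__ void axpy_float(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    // Compute capability 1.3+ is the double-precision cutoff mentioned above.
    bool fp64 = prop.major > 1 || (prop.major == 1 && prop.minor >= 3);
    std::printf("%s: compute %d.%d, fp64 %s\n",
                prop.name, prop.major, prop.minor, fp64 ? "yes" : "no");

    const int n = 1 << 20;
    float *x = 0, *y = 0;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));
    cudaMemset(y, 0, n * sizeof(float));

    // Launch the single-precision path; a real program would dispatch to a
    // double-precision variant here when fp64 is true.
    int block = 256;
    int grid = (n + block - 1) / block;
    axpy_float<<<grid, block>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}
-->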
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31639464</id>
	<title>90 degrees C, at Idle!!</title>
	<author>guidryp</author>
	<datestamp>1269705120000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p><a href="http://www.legitreviews.com/article/1258/15/" title="legitreviews.com">http://www.legitreviews.com/article/1258/15/</a> [legitreviews.com] </p><div class="quote"><p>I discovered that the GeForce GTX 480 video card was sitting at 90C in an idle state since I had two monitors installed on my system. I talked with some of the NVIDIA engineers about this 'issue' I was having and found that it wasn't really an issue per say as they do it to prevent screen flickering.  This is what NVIDIA said in response to our questions:</p><p>"We are currently keeping memory clock high to avoid some screen flicker when changing power states, so for now we are running higher idle power in dual-screen setups. Not sure when/if this will be changed. Also note we're trading off temps for acoustic quality at idle. We could ratchet down the temp, but need to turn up the fan to do so. Our fan control is set to not start increasing fan until we're up near the 80's, so the higher temp is actually by design to keep the acoustics lower." - NVIDIA PR</p><p>Regardless what the reasons are behind this, running a two monitor setup will cause your system to literally bake.</p> </div><p>Yikes!</p><p>I already wasn't impressed, but after reading this it looks more like a fiasco than just a mild disappointment.</p>
	</htmltext>
<tokenext>http : //www.legitreviews.com/article/1258/15/ [ legitreviews.com ] I discovered that the GeForce GTX 480 video card was sitting at 90C in an idle state since I had two monitors installed on my system .
I talked with some of the NVIDIA engineers about this 'issue ' I was having and found that it was n't really an issue per se as they do it to prevent screen flickering .
This is what NVIDIA said in response to our questions : " We are currently keeping memory clock high to avoid some screen flicker when changing power states , so for now we are running higher idle power in dual-screen setups .
Not sure when/if this will be changed .
Also note we 're trading off temps for acoustic quality at idle .
We could ratchet down the temp , but need to turn up the fan to do so .
Our fan control is set to not start increasing fan until we 're up near the 80 's , so the higher temp is actually by design to keep the acoustics lower .
" - NVIDIA PRRegardless what the reasons are behind this , running a two monitor setup will cause your system to literally bake .
Yikes ! I already was n't impressed , but after reading this it looks more like a fiasco , than just a mild disappointment .</tokentext>
<sentencetext>http://www.legitreviews.com/article/1258/15/ [legitreviews.com] I discovered that the GeForce GTX 480 video card was sitting at 90C in an idle state since I had two monitors installed on my system.
I talked with some of the NVIDIA engineers about this 'issue' I was having and found that it wasn't really an issue per se as they do it to prevent screen flickering.
This is what NVIDIA said in response to our questions: "We are currently keeping memory clock high to avoid some screen flicker when changing power states, so for now we are running higher idle power in dual-screen setups.
Not sure when/if this will be changed.
Also note we're trading off temps for acoustic quality at idle.
We could ratchet down the temp, but need to turn up the fan to do so.
Our fan control is set to not start increasing fan until we're up near the 80's, so the higher temp is actually by design to keep the acoustics lower.
" - NVIDIA PRRegardless what the reasons are behind this, running a two monitor setup will cause your system to literally bake.
Yikes!I already wasn't impressed, but after reading this it looks more like a fiasco, than just a mild disappointment.
	</sentencetext>
</comment>
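The PR quote above amounts to a fan policy: hold a quiet fixed duty cycle until the die is up near the 80s, then ramp toward full speed. A toy host-side sketch of such a curve follows; every threshold here is an illustrative guess, not NVIDIA's actual firmware value.

```cuda
// Toy model of the fan policy described in the quote. All numbers are assumptions.
static int fan_duty_percent(int temp_c) {
    const int ramp_start_c = 88;   // assumed "up near the 80's" trigger point
    const int ramp_end_c   = 105;  // assumed thermal limit (fan at 100%)
    const int idle_duty    = 40;   // assumed quiet idle duty cycle, in percent
    if (temp_c <= ramp_start_c) return idle_duty;
    if (temp_c >= ramp_end_c)   return 100;
    // Linear ramp between the quiet setting and full speed.
    return idle_duty + (100 - idle_duty) * (temp_c - ramp_start_c)
                     / (ramp_end_c - ramp_start_c);
}
```

Under a curve like this, a 90C dual-monitor idle barely nudges the fan, which is exactly the acoustics-over-temperature trade-off NVIDIA describes.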
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31639062</id>
	<title>Re:Fermi needs a refresh or v2</title>
	<author>Kjella</author>
	<datestamp>1269701040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Well, honestly I don't quite get this card. It's either a much cheaper Quadro or it's an overpriced, hot gaming card. It's six months after the Radeon 5xxx series launched; I wouldn't be very surprised if, between this paper launch and actual availability, AMD has binned up and announces the HD5890 to go head-to-head with nVidia for the title of fastest single gaming card again. Also, AMD has managed to roll out a full top-to-bottom series of 40nm cards already, while I think nVidia will take a lot longer to trickle down.</p><p>Yes, there are those that need CUDA, but I don't think the intersection between gamers and CUDA users is that big - and if gamers don't buy this card and the CUDA users buy it instead of the Quadros, then it's a lose-lose proposition for nVidia. But I guess I see the outlines of the "new nVidia": they're just preparing to leave the "regular" GPU market sooner than I expected. With Intel including graphics on Atoms/Core i3/Core i5 and AMD heading for AMD Fusion, nVidia is being shut out of the market for integrated graphics. But there'll still be a decent market for discrete chips between that and the CUDA-focused cards.</p><p>This card looks like a misfit, as if you'd put a truck engine in a sports car because they're both big, powerful engines. If AMD just focuses on the "average" discrete market and does that much better, they've got plenty to live off even if nVidia takes the CUDA market. I'm not so sure nVidia can sustain themselves just on being a niche high-end company. Maybe they can, but most companies that have tried have been steamrolled by the huge volume and investment happening mainstream. This card, to me at least, looks like it's serving customers with one hand and packing its bags with the other.</p></htmltext>
<tokenext>Well , honestly I do n't quite get this card .
It 's either a much cheaper Quadro or it 's an overpriced , hot gaming card .
It 's six months after the Radeon 5xxx series launched , I would n't be very surprised if between this paper launch and actual availability AMD has binned up and announces the HD5890 to go head-to-head with nVidia for the title of fastest single gaming card again .
Also AMD has managed to roll out a full series top-to-bottom of 40nm cards already , while I think nVidia will take a lot longer to trickle down . Yes , there 's those that need CUDA but I do n't think the intersection between gamers and CUDA users is that big - and if gamers do n't buy this card and the CUDA users buy this instead of the Quadros then it 's a lose-lose proposition for nVidia .
But I guess I see the outlines of the " new nVidia " , they 're just preparing to leave the " regular " GPU market sooner than I expected .
With Intel including graphics on Atoms/Core i3/Core i5 and AMD heading for AMD Fusion then nVidia is being shut out of the market for integrated graphics .
But there 'll still be a decent market for discrete chips between that and the CUDA-focused cards . This card looks like a misfit , like if you 've put a truck engine in a sports car because they 're both big , powerful engines .
If AMD just focuses on the " average " discrete market and does that much better they got plenty to live off even if nVidia takes the CUDA market .
I 'm not so sure nVidia can sustain themselves just on being a niche high-end company .
Maybe they can but most companies that have tried have been steamrolled by the huge volume and investment happening mainstream .
This card to me at least looks like it 's serving customers with one hand and packing its bags with the other .</tokentext>
<sentencetext>Well, honestly I don't quite get this card.
It's either a much cheaper Quadro or it's an overpriced, hot gaming card.
It's six months after the Radeon 5xxx series launched; I wouldn't be very surprised if, between this paper launch and actual availability, AMD has binned up and announces the HD5890 to go head-to-head with nVidia for the title of fastest single gaming card again.
Also, AMD has managed to roll out a full top-to-bottom series of 40nm cards already, while I think nVidia will take a lot longer to trickle down. Yes, there are those that need CUDA, but I don't think the intersection between gamers and CUDA users is that big - and if gamers don't buy this card and the CUDA users buy it instead of the Quadros, then it's a lose-lose proposition for nVidia.
But I guess I see the outlines of the "new nVidia": they're just preparing to leave the "regular" GPU market sooner than I expected.
With Intel including graphics on Atoms/Core i3/Core i5 and AMD heading for AMD Fusion, nVidia is being shut out of the market for integrated graphics.
But there'll still be a decent market for discrete chips between that and the CUDA-focused cards. This card looks like a misfit, as if you'd put a truck engine in a sports car because they're both big, powerful engines.
If AMD just focuses on the "average" discrete market and does that much better, they've got plenty to live off even if nVidia takes the CUDA market.
I'm not so sure nVidia can sustain themselves just on being a niche high-end company.
Maybe they can but most companies that have tried have been steamrolled by the huge volume and investment happening mainstream.
This card to me at least looks like it's serving customers with one hand and packing its bags with the other.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637642</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637642</id>
	<title>Re:Fermi needs a refresh or v2</title>
	<author>Anonymous</author>
	<datestamp>1269680640000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Maybe.  But that assumes that your GPU is just being used to render DX or OpenGL games.</p><p>I think Nvidia made a very wise business decision with Fermi.   Right now there is NO DEMAND for a video card on Fermi's level.   All of the popular games run at full quality in full HD with AA.   There is no "Crysis" which nobody can run at a decent framerate.  We've sort of plateaued at "good enough," since most games are cross-developed for consoles (which are running aging video cards) and PC.   Both AMD and Nvidia have released gaming cards that are overkill.  So Nvidia has decided to take a different tack.  They've managed to release a gaming card that is competitive with the very best video card for gaming and also redesigned their cores to be fast GPGPUs.</p><p>In the AnandTech review the GTX 400 is 2x-10x faster than the GTX 285 or Radeon 5870.</p><p>That might not do much for Modern Warfare 2, but Modern Warfare 2 already runs great.  It will offer huge performance improvements in things like video encoding, Photoshop, or any other CUDA-ready application.</p><p>As OpenCL gets used more in games for things like hair and cloth simulation or ray-traced reflections, Nvidia will have an architecture ready to deliver that as well.  At some point AMD is going to need to go through a large re-architecture as well.  But the longer they wait, the more likely they'll be trying to push out a competing product while the competition is fierce.  If there is a time to deliver an average product and suffer huge delays, it's during an economic downturn and a period where there is little reason to upgrade.</p></htmltext>
<tokenext>Maybe .
But that assumes that your GPU is just being used to render DX or OpenGL games . I think Nvidia made a very wise business decision with Fermi .
Right now there is NO DEMAND for a video card on Fermi 's level .
All of the popular games run at full quality in full HD with AA .
There is no " Crysis " which nobody can run at a decent framerate .
We 've sort of plateaued at " Good enough " since most games are cross developed for consoles ( which are running aging video cards ) and PC .
Both AMD and Nvidia have released gaming cards that are overkill .
So Nvidia has decided to take a different tack .
They 've managed to release a gaming card that is competitive with the very best video card for gaming and also redesigned their cores to be fast GPGPUs . In the AnandTech review the GTX 400 is 2x-10x faster than the GTX 285 or Radeon 5870 . That might not do much for Modern Warfare 2 but Modern Warfare 2 already runs great .
It will offer huge performance improvements in things like video encoding , Photoshop or any other CUDA ready application . As OpenCL gets used more in games for things like hair and cloth simulation or ray-traced reflections Nvidia will have an architecture ready to deliver that as well .
At some point AMD is going to need to go through a large re-architecture as well .
But the longer they wait the more likely they 'll be trying to push out a competing product while the competition is fierce .
If there is a time to deliver an average product and suffer huge delays it 's during an economic turn down and a period where there is little reason to upgrade .</tokentext>
<sentencetext>Maybe.
But that assumes that your GPU is just being used to render DX or OpenGL games. I think Nvidia made a very wise business decision with Fermi.
Right now there is NO DEMAND for a video card on Fermi's level.
All of the popular games run at full quality in full HD with AA.
There is no "Crysis" which nobody can run at a decent framerate.
We've sort of plateaued at "Good enough" since most games are cross developed for consoles (which are running aging video cards) and PC.
Both AMD and Nvidia have released gaming cards that are overkill.
So Nvidia has decided to take a different tack.
They've managed to release a gaming card that is competitive with the very best video card for gaming and also redesigned their cores to be fast GPGPUs. In the AnandTech review the GTX 400 is 2x-10x faster than the GTX 285 or Radeon 5870. That might not do much for Modern Warfare 2, but Modern Warfare 2 already runs great.
It will offer huge performance improvements in things like video encoding, Photoshop, or any other CUDA-ready application. As OpenCL gets used more in games for things like hair and cloth simulation or ray-traced reflections, Nvidia will have an architecture ready to deliver that as well.
At some point AMD is going to need to go through a large re-architecture as well.
But the longer they wait the more likely they'll be trying to push out a competing product while the competition is fierce.
If there is a time to deliver an average product and suffer huge delays it's during an economic turn down and a period where there is little reason to upgrade.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636580</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636616</id>
	<title>So...</title>
	<author>Anonymous</author>
	<datestamp>1269623220000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext><blockquote><div><p>Most unique, perhaps, is that the surface of the card is actually part of the heatsink, above the fin array. Normally, this would be a part of the card you could grab onto when pulling it out of a system. But when I burnt my hand on it, I thought a temperature reading would be interesting. Turns out that, during normal game play (running Crysis, not something like FurMark), the exposed metal exceeds 71 degrees C (or about 160 degrees F).</p></div></blockquote><p>...So, are any third party manufacturers planning on making an easy-bake oven attachment for this thing?  At least have that thing creating some gaming snacks with some of that extra heat.</p><p>Ryan Fenton</p>
	</htmltext>
<tokenext>Most unique , perhaps , is that the surface of the card is actually part of the heatsink , above the fin array .
Normally , this would be a part of the card you could grab onto when pulling it out of a system .
But when I burnt my hand on it , I thought a temperature reading would be interesting .
Turns out that , during normal game play ( running Crysis , not something like FurMark ) , the exposed metal exceeds 71 degrees C ( or about 160 degrees F ) .
...So , are any third party manufacturers planning on making an easy-bake oven attachment for this thing ?
At least have that thing creating some gaming snacks with some of that extra heat . Ryan Fenton</tokentext>
<sentencetext>Most unique, perhaps, is that the surface of the card is actually part of the heatsink, above the fin array.
Normally, this would be a part of the card you could grab onto when pulling it out of a system.
But when I burnt my hand on it, I thought a temperature reading would be interesting.
Turns out that, during normal game play (running Crysis, not something like FurMark), the exposed metal exceeds 71 degrees C (or about 160 degrees F).
...So, are any third party manufacturers planning on making an easy-bake oven attachment for this thing?
At least have that thing creating some gaming snacks with some of that extra heat. Ryan Fenton
	</sentencetext>
</comment>
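For reference, the quoted unit conversion is arithmetically sound:

$$71\,^{\circ}\mathrm{C} \times \tfrac{9}{5} + 32 = 159.8\,^{\circ}\mathrm{F} \approx 160\,^{\circ}\mathrm{F},$$

well past the roughly 60C at which brief skin contact starts to burn, so the reviewer's singed hand is no surprise.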
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637804</id>
	<title>Re:So...</title>
	<author>fragMasterFlash</author>
	<datestamp>1269683340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If this thing can be rigged to cook bacon then even I might buy one.</htmltext>
<tokenext>If this thing can be rigged to cook bacon then even I might buy one .</tokentext>
<sentencetext>If this thing can be rigged to cook bacon then even I might buy one.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636616</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637286</id>
	<title>Re:So...</title>
	<author>DigiShaman</author>
	<datestamp>1269630300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Cookies!!! And they'll pop out like a sideways toaster. Shweet</p></htmltext>
<tokenext>Cookies ! ! !
And they 'll pop out like a sideways toaster .
Shweet</tokentext>
<sentencetext>Cookies!!!
And they'll pop out like a sideways toaster.
Shweet</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636616</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31643782</id>
	<title>Re:This is why we need the on-live service to succ</title>
	<author>kf6auf</author>
	<datestamp>1269695940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Wait a second, your argument is that 3 consoles (including accessories) are as expensive as 2 gaming computers, and therefore consoles are more expensive than gaming computers?
</p><p>First of all, 3 Toyota Highlanders are more expensive than 2 BMW Z4s.  Second, you can have more than one person play a console simultaneously, but you have to take turns on the gaming rig (or buy 2) -- like fitting more people in the Highlander.  Finally, old consoles are fun (and cheap); old gaming computers suck at gaming.</p></htmltext>
<tokenext>Wait a second , your argument is that 3 consoles ( including accessories ) are as expensive as 2 gaming computers and therefore consoles are more expensive than gaming computers ?
First of all , 3 Toyota Highlanders are more expensive than 2 BMW Z4s .
Second , you can have more than one person play a console simultaneously , but you have to take turns on the gaming rig ( or buy 2 ) -- like fitting more people in the Highlander .
Finally , old consoles are fun ( and cheap ) ; old gaming computers suck at gaming .</tokentext>
<sentencetext>Wait a second, your argument is that 3 consoles (including accessories) are as expensive as 2 gaming computers and therefore consoles are more expensive than gaming computers?
First of all, 3 Toyota Highlanders are more expensive than 2 BMW Z4s.
Second, you can have more than one person play a console simultaneously, but you have to take turns on the gaming rig (or buy 2) -- like fitting more people in the Highlander.
Finally, old consoles are fun (and cheap); old gaming computers suck at gaming.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637338</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31658736</id>
	<title>Re:I want</title>
	<author>kalirion</author>
	<datestamp>1269884220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Maybe they'll call it GTS 450 and sell it for $180?</p></htmltext>
<tokenext>Maybe they 'll call it GTS 450 and sell it for $ 180 ?</tokentext>
<sentencetext>Maybe they'll call it GTS 450 and sell it for $180?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637214</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636888</id>
	<title>Re:$1000 for graphics</title>
	<author>Z34107</author>
	<datestamp>1269625560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p> <i>Come on - is that all? There HAS to be a way I can spend 5 times that to play a video game.</i> </p><p>TFA suggested purchasing two of these $500 cards, three $400 120Hz monitors, and a $200 NVIDIA stereoscopic vision kit.  That'll let you game in 3D across three 1080p monitors.</p><p>So, you can spend $1400 in accessories to match your $1000 cards.  And then, you know, buy the rest of the computer.  Not quite five times more, but I'm still salivating over getting my hands on such a setup some day...</p></htmltext>
<tokenext>Come on - is that all ?
There HAS to be a way I can spend 5 times that to play a video game .
TFA suggested purchasing two of these $ 500 cards , three $ 400 120Hz monitors , and a $ 200 NVIDIA stereoscopic vision kit .
That 'll let you game in 3D across three 1080p monitors . So , you can spend $ 1400 in accessories to match your $ 1000 cards .
And then , you know , buy the rest of the computer .
Not quite five times more , but I 'm still salivating over getting my hands on such a setup some day.. .</tokentext>
<sentencetext> Come on - is that all?
There HAS to be a way I can spend 5 times that to play a video game.
TFA suggested purchasing two of these $500 cards, three $400 120Hz monitors, and a $200 NVIDIA stereoscopic vision kit.
That'll let you game in 3D across three 1080p monitors. So, you can spend $1400 in accessories to match your $1000 cards.
And then, you know, buy the rest of the computer.
Not quite five times more, but I'm still salivating over getting my hands on such a setup some day...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636556</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637276</id>
	<title>Re:This is why we need the on-live service to succ</title>
	<author>Namarrgon</author>
	<datestamp>1269630060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Well what on earth are you buying the new cards for? Last year's mid-range cards are far cheaper and perfectly adequate for any game around (especially if you run at console-standard 720p). Also, if you last upgraded in 2004 - you'd be needing a new console by now anyways. Not many games released for the PS2 or Xbox lately.</p><p>On-Live will be ok for slower-paced games (latency kills any FPS playing), but you'll need a fairly beefy connection if you want even console-level resolutions, let alone PC-level. Plus, the larger the frame size, the greater the transmit time and the larger the latency.</p></htmltext>
<tokenext>Well what on earth are you buying the new cards for ?
Last year 's mid-range cards are far cheaper and perfectly adequate for any game around ( especially if you run at console-standard 720p ) .
Also , if you last upgraded in 2004 - you 'd be needing a new console by now anyways .
Not many games released for the PS2 or Xbox lately . On-Live will be ok for slower-paced games ( latency kills any FPS playing ) , but you 'll need a fairly beefy connection if you want even console-level resolutions , let alone PC-level .
Plus , the larger the frame size , the greater the transmit time and the larger the latency .</tokentext>
<sentencetext>Well what on earth are you buying the new cards for?
Last year's mid-range cards are far cheaper and perfectly adequate for any game around (especially if you run at console-standard 720p).
Also, if you last upgraded in 2004 - you'd be needing a new console by now anyways.
Not many games released for the PS2 or Xbox lately. On-Live will be OK for slower-paced games (latency kills any FPS playing), but you'll need a fairly beefy connection if you want even console-level resolutions, let alone PC-level.
Plus, the larger the frame size, the greater the transmit time and the larger the latency.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637102</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31643772</id>
	<title>Re:Fermi needs a refresh or v2</title>
	<author>Latinhypercube</author>
	<datestamp>1269695880000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Agreed. Anyone serious about 3D should check out mental images' (now owned by Nvidia) iRay running on Fermi.</htmltext>
<tokenext>Agreed .
Anyone serious about 3d check out mental images ( now owned by Nvidia ) iRay running on Fermi .</tokentext>
<sentencetext>Agreed.
Anyone serious about 3D should check out mental images' (now owned by Nvidia) iRay running on Fermi.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637642</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636864</id>
	<title>GTX285</title>
	<author>Anonymous</author>
	<datestamp>1269625380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I got me one of these back in October for $400. (Eh, pricey, but worth every cent.) It's a nice 2GB video card.</p></htmltext>
<tokenext>I got me one of these back in october for $ 400 .
( Eh , pricy , but worth every cent ) Its a nice 2gb video card .</tokentext>
<sentencetext>I got me one of these back in october for $400.
(Eh, pricy, but worth every cent) Its a nice 2gb video card.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636918</id>
	<title>New hardware is good</title>
	<author>Sarten-X</author>
	<datestamp>1269625680000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>I love seeing new generations of hardware come out. It means that the perfectly adequate cards from two years ago will be even cheaper.</htmltext>
<tokenext>I love seeing new generations of hardware come out .
It means that the perfectly adequate cards from two years ago will be even cheaper .</tokentext>
<sentencetext>I love seeing new generations of hardware come out.
It means that the perfectly adequate cards from two years ago will be even cheaper.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31641264</id>
	<title>Re:$1000 for graphics</title>
	<author>TheLink</author>
	<datestamp>1269716940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Or buy a suitably big building and then add a whole bunch of projectors and other fancy stuff, like what this bunch uses:</p><p><a href="http://www.youtube.com/user/TheDarkroomTV#p/u" title="youtube.com">http://www.youtube.com/user/TheDarkroomTV#p/u</a> [youtube.com]</p></htmltext>
<tokenext>Or buy a suitably big building and then add a whole bunch of projectors and other fancy stuff like what these bunch use : http : //www.youtube.com/user/TheDarkroomTV # p/u [ youtube.com ]</tokentext>
<sentencetext>Or buy a suitably big building and then add a whole bunch of projectors and other fancy stuff like what these bunch use:http://www.youtube.com/user/TheDarkroomTV#p/u [youtube.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31638374</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31638838</id>
	<title>No source engine benchmarks?</title>
	<author>Hadlock</author>
	<datestamp>1269698940000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>I got halfway through the first paragraph before I started looking for the link to the L4D2 benchmarks, which are a pretty good indicator of how well your computer is going to run L4D2, TF2, and, very importantly, Portal 2. None detected, even though it's one of their primary tests in all of their video card shootouts. Another failure for the guys at Tom's Hardware.</p></htmltext>
<tokenext>I got halfway through the first paragraph before I started looking for the link to the L4D2 benchmarks , which are a pretty good indicator of how well your computer is going to run L4D2 , TF2 , and very importantly , Portal2 .
None detected , even though it 's one of their primary tests on all of their video card shootouts .
Another failure for the guys at Tom 's Hardware .</tokentext>
<sentencetext>I got halfway through the first paragraph before I started looking for the link to the L4D2 benchmarks, which are a pretty good indicator of how well your computer is going to run L4D2, TF2, and very importantly, Portal2.
None detected, even though it's one of their primary tests on all of their video card shootouts.
Another failure for the guys at Tom's Hardware.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31638532</id>
	<title>Re:Fermi needs a refresh or v2</title>
	<author>Anonymous</author>
	<datestamp>1269694620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>And what about OpenGL?  It's completely useless without OpenGL support.</p></htmltext>
<tokenext>And what about OpenGL ?
It 's completely useless without OpenGL support .</tokentext>
<sentencetext>And what about OpenGL?
It's completely useless without OpenGL support.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636580</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31642584</id>
	<title>Re:Fermi needs a refresh or v2</title>
	<author>Hurricane78</author>
	<datestamp>1269684240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Well you sissies... Let me throw a full-scale realtime global illumination renderer at that thing, and see how well it fares THEN! ^^</p></htmltext>
<tokenext>Well you sissies... Let me throw a full-scale realtime global illumination renderer at that thing , and see how well it fares THEN !
^ ^</tokentext>
<sentencetext>Well you sissies... Let me throw a full-scale realtime global illumination renderer at that thing, and see how well it fares THEN!
^^</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637642</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636524</id>
	<title>Poop</title>
	<author>Anonymous</author>
	<datestamp>1269622500000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>-1</modscore>
	<htmltext><p>POOP FIRST!</p></htmltext>
<tokenext>POOP FIRST !</tokentext>
<sentencetext>POOP FIRST!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636820</id>
	<title>Bleeding edge isn't usually worth it</title>
	<author>NotSoHeavyD3</author>
	<datestamp>1269625080000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext>I mean look at it like this. You can probably get a card for $120-$150 now that will run every current game well right now. (Well, except for Crysis.) So there is no point in buying it for current games. You could get that $500 card hoping that it will run future games well, but it never seems to happen that way. (They're slow no matter what old card you have.) Instead you can just buy another $120-$150 card in a few years, and that one will run them well. (This way you end up spending less money and actually get better performance; see the arithmetic below.) So my experience is: just buy a decent card ($120-$150), and in a few years buy another one and do whatever with the old one. (Sell it, give it to a family member, whatever.)</htmltext>
<tokenext>I mean look at it like this .
You can probably get a card for $ 120- $ 150 now that will probably run every current game well right now .
( Well except for Crysis ) So there is no point in buying it for current games .
You could get that $ 500 card hoping that it will run future games well but it never seems to happen that way .
( They 're slow no matter what old card you have .
) Instead you can just buy another $ 120- $ 150 card in a few years and that one will run it well .
( This way you end up spending less money and actually get better performance .
) So my experience is just buy a decent card ( $ 120- $ 150 ) and in a few years buy another one and do whatever with the old one .
( Sell it , give it to a family member whatever .
)</tokentext>
<sentencetext>I mean look at it like this.
You can probably get a card for $120-$150 now that will run every current game well right now.
(Well, except for Crysis.) So there is no point in buying it for current games.
You could get that $500 card hoping that it will run future games well but it never seems to happen that way.
(They're slow no matter what old card you have.
) Instead you can just buy another $120-$150 card in a few years and that one will run it well.
(This way you end up spending less money and actually get better performance.
) So my experience is just buy a decent card ($120-$150) and in a few years buy another one and do whatever with the old one.
(Sell it, give it to a family member whatever.
)</sentencetext>
</comment>
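Making the parent's budget arithmetic explicit (taking the midpoint of the quoted range), two staggered mid-range purchases still undercut one flagship:

$$2 \times \$135 = \$270 < \$500,$$

and the second $135 card, bought years later, is the one actually matched to the games of its day, which is the commenter's point about getting better performance for less money.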
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636970</id>
	<title>Re:Crap Hardware vs. Crap Drivers? Is that it atm?</title>
	<author>Skarecrow77</author>
	<datestamp>1269626160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I hear the ATI proprietary drivers are great as long as you don't wanna run anything newer than the 2xxx series...</p><p>And there are of course the nouveau open source drivers, whose 3d acceleration is best paraphrased as "maybe someday. stop asking dammit."</p><p>Yeah, it's pretty much Nvidia or something from someone else circa 2007, take your pick.</p></htmltext>
<tokenext>I hear the ATI proprietary drivers are great as long as you do n't wan na run anything newer than the 2xxx series...And there are of course the nouveau open source drivers , whose 3d acceleration is best paraphrased as " maybe someday .
stop asking dammit .
" Yeah , it 's pretty much Nvidia or something from someone else circa 2007 , take your pick .</tokentext>
<sentencetext>I hear the ATI proprietary drivers are great as long as you don't wanna run anything newer than the 2xxx series...And there are of course the nouveau open source drivers, whose 3d acceleration is best paraphrased as "maybe someday.
stop asking dammit.
"Yeah, it's pretty much Nvidia or something from someone else circa 2007, take your pick.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636610</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637094</id>
	<title>I don't understand.</title>
	<author>Anonymous</author>
	<datestamp>1269627480000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Why is Slashdot reporting on this? Did you tell me back in April 2000 that the Geforce 2 was the only card I'd ever need? This betrayal hurts, it really does.</p></htmltext>
<tokenext>Why is Slashdot reporting on this ?
Did you tell me back in April 2000 that the Geforce 2 was the only card I 'd ever need ?
This betrayal hurts , it really does .</tokentext>
<sentencetext>Why is Slashdot reporting on this?
Did you tell me back in April 2000 that the Geforce 2 was the only card I'd ever need?
This betrayal hurts, it really does.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636604</id>
	<title>Nvidia can only hope...</title>
	<author>fuzzyfuzzyfungus</author>
	<datestamp>1269623220000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext>That there are a lot of lunatic performance enthusiasts and deep-pocketed GPU computing users out there. $500, <i>250 watts</i>, only modestly faster than the competitor's cheaper, cooler card that has been out for some months now, and has variants and cut-downs spanning more or less the entire price/performance spectrum from sub-$100 to mid $400s...<br> <br>

One cannot deny that they are, in fact, the fastest; but in all other respects they just got <i>owned</i>. More power draw than a <i>CPU</i> from the bad old days of Prescott (and Prescott was 90nm, this sucker is 40nm), a gigantic die that must cost a small fortune just to manufacture, hideously audible fan noise just to keep the thing from melting down. They'll have to cut the power draw by a factor of five to land any laptop design wins at all, a factor of ten for anything that isn't a 2.5-inch-thick gamer box of a laptop.<br> <br>

Unless there is a large enough market of crazy gamers who just must have the fastest, or GPU computing people who don't care how expensive or noisy these cards are because they are in the datacenter doing some sort of algorithmic trading, Nvidia has a real loser on their hands...</htmltext>
<tokenext>That there are a lot of lunatic performance enthusiasts and deep-pocketed GPU computing users out there .
$ 500 , 250 watts , only modestly faster than the competitor 's cheaper , cooler card that has been out for some months now , and has variants and cut-downs spanning more or less the entire price/performance spectrum from sub- $ 100 to mid $ 400s.. . One can not deny that they are , in fact , the fastest ; but in all other respects they just got owned .
More power draw than a CPU from the bad old days of Prescott ( and Prescott was 90nm , this sucker is 40nm ) , a gigantic die that must cost a small fortune just to manufacture , hideously audible fan noise just to keep the thing from melting down .
They 'll have to cut the power draw by a factor of five to land any laptop design wins at all , a factor of ten for anything that is n't a 2.5 inch thick gamer box of a laptop .
Unless there is a large enough market of crazy gamers who just must have the fastest , or GPU computing people who do n't care how expensive or noisy these cards are because they are in the datacenter doing some sort of algorithmic trading , Nvidia has a real loser on their hands.. .</tokentext>
<sentencetext>That there are a lot of lunatic performance enthusiasts and deep-pocketed GPU computing users out there.
$500, 250 watts, only modestly faster than the competitor's cheaper, cooler card that has been out for some months now, and has variants and cut-downs spanning more or less the entire price/performance spectrum from sub-$100 to mid $400s... 

One cannot deny that they are, in fact, the fastest; but in all other respects they just got owned.
More power draw than a CPU from the bad old days of Prescott (and Prescott was 90nm, this sucker is 40nm), a gigantic die that must cost a small fortune just to manufacture, hideously audible fan noise just to keep the thing from melting down.
They'll have to cut the power draw by a factor of five to land any laptop design wins at all, a factor of ten for anything that isn't a 2.5 inch thick gamer box of a laptop.
Unless there is a large enough market of crazy gamers who just must have the fastest, or GPU computing people who don't care how expensive or noisy these cards are because they are in the datacenter doing some sort of algorithmic trading, Nvidia has a real loser on their hands...</sentencetext>
</comment>
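Against this card's 250 W board power, those factors translate to concrete budgets:

$$250\,\mathrm{W} / 5 = 50\,\mathrm{W}, \qquad 250\,\mathrm{W} / 10 = 25\,\mathrm{W},$$

roughly the envelopes of a desktop-replacement GPU and a mainstream laptop GPU of the era, respectively.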
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636572</id>
	<title>Expensive, power hungry?</title>
	<author>Megahard</author>
	<datestamp>1269622980000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext>Sounds like the GF100 turned into the MRS100.</htmltext>
<tokenext>Sounds like the GF100 turned into the MRS100 .</tokentext>
<sentencetext>Sounds like the GF100 turned into the MRS100.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637604</id>
	<title>Basically an Epic Fail</title>
	<author>gweihir</author>
	<datestamp>1269723240000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>All their boasting cannot obscure that. Nvidia has nothing this market round. Maybe they will be back in the game next round, but only if they can moderate their arrogance and stop lying to their customers. Otherwise it looks like Nvidia may become history with regard to the consumer market.</p><p>I certainly will not buy from them again after 2 failed GFX cards (the bump problem) and 1 failed mainboard (much too much heat), both from shoddy engineering on their part. It also seems that they have lost their edge on the driver side. I have now had several instances where Nvidia GFX crashed, while AMD GFX did not. It may also be that Nvidia hardware is now so bad that the drivers cannot compensate anymore.</p></htmltext>
<tokenext>All their boasting can not obscure that .
Nvidia has nothing this market round .
Maybe they will be back in the game next round , but only if they can moderate their arrogance and stop lying to their customers .
Otherwise it looks like Nvidia may become history with regard to the consumer market . I certainly will not buy from them again after 2 failed GFX cards ( the bump problem ) and 1 failed mainboard ( much too much heat ) , both from shoddy engineering on their part .
It also seems that they have lost their edge on the driver side .
I have now had several instances where Nvidia GFX crashed , while AMD GFX did not .
May also be that Nvidia hardware is now so bad that the drivers can not compensate anymore .</tokentext>
<sentencetext>All their boasting cannot obscure that.
Nvidia has nothing this market round.
Maybe they will be back in the game next round, but only if they can moderate their arrogance and stop lying to their customers.
Otherwise it looks like Nvidia may become history with regard to the consumer market. I certainly will not buy from them again after 2 failed GFX cards (the bump problem) and 1 failed mainboard (much too much heat), both from shoddy engineering on their part.
It also seems that they have lost their edge on the driver side.
I have now had several instances where Nvidia GFX crashed, while AMD GFX did not.
May also be that Nvidia hardware is now so bad that the drivers cannot compensate anymore.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637322</id>
	<title>Crippled double precision, bleh.</title>
	<author>Anonymous</author>
	<datestamp>1269630960000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>The only real reason I wanted to get a Fermi / GTX480 card was to experiment with GPGPU and finally be able to work at reasonable performance using double-precision algorithms, which my 8800GT won't do. Now I find that they've crippled the double-precision performance to something like 1/4th the hardware's actual capability just to price-gouge the developers that want that capability, as opposed to those just playing video games. So as it stands the AMD 5870 is about 2/3rds the price or so, has 4x the double-precision performance, runs a fair bit cooler, and has been available for many months as well.  I think I'll have to pass on the Fermi / GTX4xx series cards until they come to their senses and make a product that is fully competitive in this regard with the much older and much cheaper AMD 58xx series products.</p><p>I don't really see why NVIDIA would think that it is reasonable to cripple DP performance for market-segmentation reasons, as if somehow DP weren't a mainstream necessity for consumer and small-business computing; every CPU out there has had a DP FPU for decades now (and wouldn't have if it weren't useful for absolutely ordinary tasks), and OpenCL / DirectCompute / HDR / etc. are all technologies that very much benefit from DP and are being pushed heavily for mainstream multimedia, image processing, and ordinary PC application performance enhancement.</p><p>It is hardly esoteric HPC-level stuff these days.  Actually the real question is why it has taken so long to get quad precision / long double / whatever standardized into the computer languages / compilers (C, C++, CLR) and CPUs / GPUs; it would've been a logical progression around the time things went to 64-bit, or earlier (for different but analogous reasons).</p><p>Now if only AMD's drivers and OpenCL implementations weren't quite so bad...</p></htmltext>
<tokenext>The only real reason I wanted to get a Fermi / GTX480 card was to experiment with GPGPU and finally be ableto work at reasonable performance using double precision algorithms which my 8800GT wo n't do.Now I find that they 've crippled the double precision performance to something like 1/4th the hardware 's actual capabilityjust to price gouge the developers that want that capability as opposed to just playing video games.So as it stands the AMD 5870 is about 2/3rds the price or so and has 4x the double precision performance , runs a fairbit cooler , and has been available for many months as well .
I think I 'll have to pass on the Fermi / GTX4xx series cardsuntil they get to their senses and make a product that is fully competitive with the much older and much cheaper AMD58xx series products performances in this regard.I do n't really see why NVIDIA would think that it is reasonable to cripple DP performance for market segmentation reasonsas if somehow DP was n't a mainstream necessity for consumer and small business computing ; every CPU out there has had a DPFPU for decades now ( and would n't have if it was n't useful to absolutely ordinary tasks ) , and OpenCL / DirectCompute / HDR / etc .
etc. are all technologies that very much benefit from DP that are being pushed heavily for mainstream multimedia , image processing , and ordinary PC application performance enhancement.It is hardly esoteric HPC level stuff these days .
Actually the real question is why it has taken so longto get quad precision / long double / whatever standardized into the computer languages / compilers ( C , C + + , CLR ) and CPUs / GPUs , it would 've been a logical progression around the time things went to 64 bit or earlier ( for different but analogous reasons ) .Now if only AMD 's drivers and OpenCL implementations were n't quite so bad .
. .</tokentext>
<sentencetext>The only real reason I wanted to get a Fermi / GTX480 card was to experiment with GPGPU and finally be ableto work at reasonable performance using double precision algorithms which my 8800GT won't do.Now I find that they've crippled the double precision performance to something like 1/4th the hardware's actual capabilityjust to price gouge the developers that want that capability as opposed to just playing video games.So as it stands the AMD 5870 is about 2/3rds the price or so and has 4x the double precision performance, runs a fairbit cooler, and has been available for many months as well.
I think I'll have to pass on the Fermi / GTX4xx series cardsuntil they get to their senses and make a product that is fully competitive with the much older and much cheaper AMD58xx series products performances in this regard.I don't really see why NVIDIA would think that it is reasonable to cripple DP performance for market segmentation reasonsas if somehow DP wasn't a mainstream necessity for consumer and small business computing; every CPU out there has had a DPFPU for decades now (and wouldn't have if it wasn't useful to absolutely ordinary tasks), and OpenCL / DirectCompute / HDR / etc.
etc. are all technologies that very much benefit from DP that are being pushed heavily for mainstream multimedia, image processing, and ordinary PC application performance enhancement.It is hardly esoteric HPC level stuff these days.
Actually the real question is why it has taken so longto get quad precision / long double / whatever standardized into the computer languages / compilers (C, C++, CLR) and CPUs / GPUs, it would've been a logical progression around the time things went to 64 bit or earlier (for different but analogous reasons).Now if only AMD's drivers and OpenCL implementations weren't quite so bad.
. .</sentencetext>
</comment>
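For anyone wanting to verify the situation the parent describes, a hedged CUDA sketch: query whether the device supports double precision at all (compute capability 1.3 or higher), then launch a trivial DP kernel. Note that a capability query cannot reveal the throttled DP throughput of consumer boards; that gap only shows up when you actually benchmark DP-heavy kernels. The array size and launch configuration below are arbitrary assumptions.

```cuda
// Hedged sketch: detect double-precision support, then run a trivial DP kernel.
// Compute capability 1.3 is the documented minimum for hardware doubles.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void daxpy(int n, double a, const double *x, double *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];   // double-precision multiply-add
}

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("%s: compute capability %d.%d\n", prop.name, prop.major, prop.minor);
    if (prop.major == 1 && prop.minor < 3) {
        // e.g. the 8800GT mentioned above (compute 1.1): no hardware doubles.
        printf("No double-precision support on this device.\n");
        return 0;
    }
    const int n = 1 << 20;
    double *x, *y;
    cudaMalloc(&x, n * sizeof(double));
    cudaMalloc(&y, n * sizeof(double));
    cudaMemset(x, 0, n * sizeof(double));   // zero inputs; we only care that it runs
    cudaMemset(y, 0, n * sizeof(double));
    daxpy<<<(n + 255) / 256, 256>>>(n, 2.0, x, y);
    cudaDeviceSynchronize();
    printf("daxpy ran over %d doubles; time it to see real DP throughput.\n", n);
    cudaFree(x); cudaFree(y);
    return 0;
}
```

Timing many such launches against the card's theoretical peak is what exposes the consumer-card DP cap the comment complains about.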
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636580</id>
	<title>Fermi needs a refresh or v2</title>
	<author>Artem Tashkinov</author>
	<datestamp>1269623100000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext>To summarize the Fermi paper launch:
<ul>
<li>Fermi is a damn hot and noisy beast</li>
<li>Fermi is more expensive and only slightly faster than the respective ATI Radeon cards, thus DAAMIT will not cut prices for Radeons in the near future</li>
<li>Punters will have to wait at least two weeks for general availability</li>
<li>Fermi desperately needs a reboot/refresh/whatever to attract the masses</li>
</ul><p>
It seems like NVIDIA has fallen into the same trap as with the GeForce 5XXX generation launch.</p></htmltext>
<tokenext>To summarize Fermi paper launch : Fermi is a damn hot and noisy beast Fermi is more expensive and only slightly faster than the respective ATI Radeon cards , thus DAAMIT will not cut prices for Radeons in the nearest future Punters will have to wait at least for two weeks for general availability Fermi desperately needs a reboot/refresh/whatever to attract masses It seems like NVIDIA has fallen into the same trap as with GeForce 5XXX generation launch .</tokentext>
<sentencetext>To summarize Fermi paper launch:

Fermi is a damn hot and noisy beast
Fermi is more expensive and only slightly faster than the respective ATI Radeon cards, thus DAAMIT will not cut prices for Radeons in the nearest future
Punters will have to wait at least for two weeks for general availability
Fermi desperately needs a reboot/refresh/whatever to attract masses

It seems like NVIDIA has fallen into the same trap as with GeForce 5XXX generation launch.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637102</id>
	<title>This is why we need the on-live service to succeed</title>
	<author>Flentil</author>
	<datestamp>1269627600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>These new cards, as usual, are way too expensive.  I had the best video card when Doom 3 came out.  Since then I've upgraded once, and I need a whole new motherboard, CPU, and RAM before I can upgrade to a newer card.  This is why people turn to consoles.  This is what's killing PC gaming.  I really hope OnLive works out, as I see it as the ultimate solution to this problem without having to resort to an Xbox/PS3.</htmltext>
<tokenext>These new cards , as usual , are way too expensive .
I had the best video card when Doom3 came out .
Since then I 've upgraded once , and need a whole new motherboard , CPU , and RAM before I can upgrade to a newer card .
This is why people turn to consoles .
This is what 's killing PC gaming .
I really hope on-live works out , as I see it as the ultimate solution to this problem without having to resort to an xbox/ps3 .</tokentext>
<sentencetext>These new cards, as usual, are way too expensive.
I had the best video card when Doom 3 came out.
Since then I've upgraded once, and need a whole new motherboard, CPU, and RAM before I can upgrade to a newer card.
This is why people turn to consoles.
This is what's killing PC gaming.
I really hope OnLive works out, as I see it as the ultimate solution to this problem without having to resort to an Xbox/PS3.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636512</id>
	<title>I'm really not impressed.</title>
	<author>Anonymous</author>
	<datestamp>1269622440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>They caught up with ATI, but with a more expensive, hotter and more power-hungry card.</p></htmltext>
<tokenext>They caught up with ATI but with a more expensive , hotter and more power hungry card .</tokentext>
<sentencetext>They caught up with ATI, but with a more expensive, hotter and more power-hungry card.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31641962</id>
	<title>Re:Bleeding edge isn't usually worth it</title>
	<author>Gunslinger47</author>
	<datestamp>1269722460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I bought two 7800GTX video cards five years ago for the same price as two GTX480s will cost.  Mass Effect 2 still runs fine on them at high settings.  (Though the loading screens murder my single-core.)
</p><p>
Spending $1000 back then was way better than spending $200 on a brand new video card every year.  If I did that, I'd only have a GTX260 right now...  Wait a second.
</p></htmltext>
<tokenext>I bought two 7800GTX video cards five years ago for the same price as two GTX480s will cost .
Mass Effect 2 still runs fine on them at high settings .
( Though the loading screens murder my single-core .
) Spending $ 1000 back then was way better than spending $ 200 on a brand new video card every year .
If I did that , I 'd only have a GTX260 right now... Wait a second .</tokentext>
<sentencetext>I bought two 7800GTX video cards five years ago for the same price as two GTX480s will cost.
Mass Effect 2 still runs fine on them at high settings.
(Though the loading screens murder my single-core.
)

Spending $1000 back then was way better than spending $200 on a brand new video card every year.
If I did that, I'd only have a GTX260 right now...  Wait a second.
</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636820</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31638188</id>
	<title>Re:$1000 for graphics</title>
	<author>crossmr</author>
	<datestamp>1269689760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Sure, I'll sit in front of your machine and stop you from playing it until you pay me $4000.</p></htmltext>
<tokenext>Sure , I 'll sit in front of your machine and stop you from playing it until you pay me $ 4000 .</tokentext>
<sentencetext>Sure, I'll sit in front of your machine and stop you from playing it until you pay me $4000.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636556</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637214</id>
	<title>I want</title>
	<author>Anonymous</author>
	<datestamp>1269629220000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>a 40nm 9800GT with an 80W TDP.  The 9800 is fast enough for my needs and has been for 2 years now.  Less heat.  Less power.  Less noise.  A 150W video card has absolutely no appeal to me.</p></htmltext>
<tokenext>a 40nm 9800GT with an 80W TDP .
The 9800 is fast enough for my needs and has been for 2 years now .
Less heat .
Less power .
Less noise .
A 150W video card has absolutely no appeal to me .</tokentext>
<sentencetext>a 40nm 9800GT with an 80W TDP.
The 9800 is fast enough for my needs and has been for 2 years now.
Less heat.
Less power.
Less noise.
A 150W video card has absolutely no appeal to me.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31640804</id>
	<title>Re:Fermi needs a refresh or v2</title>
	<author>TheKidWho</author>
	<datestamp>1269714000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There is a significant difference in architecture between the two units.  If all you are doing is integer operations, then the Radeon will be faster indeed since it contains more execution units.</p></htmltext>
<tokenext>There is a significant difference in architecture between the two units .
If all you are doing is integer operations , then the Radeon will be faster indeed since it contains more execution units .</tokentext>
<sentencetext>There is a significant difference in architecture between the two units.
If all you are doing is integer operations, then the Radeon will be faster indeed since it contains more execution units.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637952</parent>
</comment>
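<!--
A minimal sketch (plain C, run on the CPU) of the kind of integer-only
"number crunching" inner loop the integer-operation comparison above turns on.
On a GPU each array element would be one thread, so sustained throughput tracks
how many integer execution units the chip exposes; the constant and loop
counts here are illustrative, not taken from either vendor.

#include <stdint.h>
#include <stdio.h>

/* Per-element work: a multiply/xor mixing loop with no floating point at
   all.  Workloads like this stress only the integer ALUs. */
static uint32_t mix(uint32_t x, int rounds)
{
    for (int i = 0; i < rounds; i++) {
        x *= 2654435761u;   /* Knuth's multiplicative hash constant */
        x ^= x >> 15;
    }
    return x;
}

int main(void)
{
    uint32_t acc = 0;
    for (uint32_t i = 0; i < (1u << 20); i++)
        acc ^= mix(i, 32);
    printf("checksum: %08x\n", (unsigned)acc);  /* keeps the loop from being optimized away */
    return 0;
}
-->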
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636948</id>
	<title>hello Nvidia GrillForce</title>
	<author>distantbody</author>
	<datestamp>1269625920000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext>It's about time a product acknowledged my desktop grilling needs.</htmltext>
<tokenext>It 's about time a product acknowledged my desktop grilling needs .</tokentext>
<sentencetext>It's about time a product acknowledged my desktop grilling needs.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637588</id>
	<title>Minimum Framerates and Graphics Lag spikes</title>
	<author>moozoo</author>
	<datestamp>1269723000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I think Fermi can be summed up with the comments near the bottom of the Crysis Warhead benchmark in the review done by AnandTech.

"The GTX 400 series completely tramples the 5000 series when it comes to minimum framerates, far more than we would have expected. "

Fermi is a Mack truck that ploughs through the tougher scenes.

There is nothing worse than having smoke, explosions, waterfalls, etc. causing graphics lag spikes.</htmltext>
<tokenext>I think Fermi can be summed up with the comments near the bottom of the Crysis Warhead benchmark in the review done by AnandTech .
" The GTX 400 series completely tramples the 5000 series when it comes to minimum framerates , far more than we would have expected .
" Fermi is a mac truck that ploughs though the tougher scenes .
There is nothing worst than having smoke , explosions , and water falls etc causing graphics spikes .</tokentext>
<sentencetext>I think Fermi can be summed up with the comments near the bottom of the Crysis Warhead benchmark in the review done by AnandTech.
"The GTX 400 series completely tramples the 5000 series when it comes to minimum framerates, far more than we would have expected.
"

Fermi is a Mack truck that ploughs through the tougher scenes.
There is nothing worse than having smoke, explosions, waterfalls, etc. causing graphics lag spikes.</sentencetext>
</comment>
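<!--
Why reviewers single out minimum framerate, as the comment above does: one
slow frame (smoke, explosions) barely moves the average but is exactly the
stutter a player feels. A minimal sketch in C with made-up frame times,
showing how far the two numbers can diverge.

#include <stdio.h>

int main(void)
{
    /* Hypothetical per-frame render times in milliseconds; one spike. */
    double frame_ms[] = { 16.7, 16.7, 16.7, 90.0, 16.7, 16.7 };
    int n = sizeof frame_ms / sizeof frame_ms[0];
    double total = 0.0, worst = 0.0;
    for (int i = 0; i < n; i++) {
        total += frame_ms[i];
        if (frame_ms[i] > worst)
            worst = frame_ms[i];
    }
    printf("average fps: %.1f\n", 1000.0 * n / total); /* ~34.6 */
    printf("minimum fps: %.1f\n", 1000.0 / worst);     /* ~11.1 */
    return 0;
}
-->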
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31639110</id>
	<title>Re:$1000 for graphics</title>
	<author>Anonymous</author>
	<datestamp>1269701580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>In a time of economic hardship and a drive for less power hungry devices, Nvidia presents us with a $1000 grill. And the games you play on it will be just as entertaining as those of ten years ago, just looking slightly better. I guess it's progress of a sort.</p></htmltext>
<tokenext>In a time of economic hardship and a drive for less power hungry devices , Nvidia presents us with a $ 1000 grill .
And the games you play on it will be just as entertaining as those of ten years ago , just looking slightly better .
I guess it 's progress of a sort .</tokentext>
<sentencetext>In a time of economic hardship and a drive for less power hungry devices, Nvidia presents us with a $1000 grill.
And the games you play on it will be just as entertaining as those of ten years ago, just looking slightly better.
I guess it's progress of a sort.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636556</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636788</id>
	<title>Modern Engineering</title>
	<author>lightrush</author>
	<datestamp>1269624720000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext>What do you know, heaters for PCIe come with moderately fast GPUs onboard these days!</htmltext>
<tokenext>What do you know , heaters for PCIe come with moderately fast GPUs onboard these days !</tokentext>
<sentencetext>What do you know, heaters for PCIe come with moderately fast GPUs onboard these days!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636802</id>
	<title>AnandTech Review</title>
	<author>alvinrod</author>
	<datestamp>1269624840000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext>There's also an <a href="http://www.anandtech.com/video/showdoc.aspx?i=3783&amp;p=1" title="anandtech.com">AnandTech</a> [anandtech.com] review which is pretty good and has plenty of different benchmarks. It has the added benefit of testing a 480 SLI configuration which produces some interesting results. It also presents some benchmarks that help to show off nVidia's GPGPU performance as well, which is something that they've been using to hype these new cards.
<br> <br>
In my own opinion, ATI still has a competitive advantage, especially considering that they can always drop their price if they feel threatened. nVidia is lucky that they have the ION and Tegra to fall back on, because it seems as though they don't have a pot to piss in right now in terms of high-end desktop graphics offerings. The 480 seems to be about equal to similarly priced ATI offerings and doesn't give them the edge in performance that they're accustomed to having.</htmltext>
<tokenext>There 's also an AnandTech [ anandtech.com ] review which is pretty good and has plenty of different benchmarks .
It has the added benefit of testing a 480 SLI configuration which produces some interesting results .
It also presents some benchmarks that help to show off nVidia 's GPGPU performance as well , which is something that they 've been using to hype these new cards .
In my own opinion , ATI still has a competitive advantage , especially considering that they can always drop their price if they feel threatened .
nVidia is lucky that they have the ION and Tegra to fall back on , because it seems as though they do n't have a pot to piss in right now in terms of high-end desktop graphics offerings .
The 480 seems to be about equal to similarly priced ATI offerings and does n't give them the edge in performance that they 're accustomed to having .</tokentext>
<sentencetext>There's also an AnandTech [anandtech.com] review which is pretty good and has plenty of different benchmarks.
It has the added benefit of testing a 480 SLI configuration which produces some interesting results.
It also presents some benchmarks that help to show off nVidia's GPGPU performance as well, which is something that they've been using to hype these new cards.
In my own opinion, ATI still has a competitive advantage, especially considering that they can always drop their price if they feel threatened.
nVidia is lucky that they have the ION and Tegra to fall back on, because it seems as though they don't have a pot to piss in right now in terms of high-end desktop graphics offerings.
The 480 seems to be about equal to similarly priced ATI offerings and doesn't give them the edge in performance that they're accustomed to having.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637952</id>
	<title>Re:Fermi needs a refresh or v2</title>
	<author>JuniorJack</author>
	<datestamp>1269686460000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p><div class="quote"><p>Not to mention the overwhelming lead Nvidia has with GPGPU currently.</p></div><p>We are using GPU's for a number crunching tasks - integer operations. Currently one 5970 (aircooled) outperforms<br>a computer with 4 x GTX 295, watercooled and overclocked to 725 Mhz each.</p><p>NVIDIA has to do really much better with those new cards to win us back</p></div>
	</htmltext>
<tokenext>Not to mention the overwhelming lead Nvidia has with GPGPU currently .
We are using GPUs for number-crunching tasks - integer operations .
Currently one 5970 ( aircooled ) outperforms a computer with 4 x GTX 295 , watercooled and overclocked to 725 MHz each .
NVIDIA has to do much better with those new cards to win us back .</tokentext>
<sentencetext>Not to mention the overwhelming lead Nvidia has with GPGPU currently.
We are using GPUs for number-crunching tasks - integer operations.
Currently one 5970 (aircooled) outperforms a computer with 4 x GTX 295, watercooled and overclocked to 725 MHz each.
NVIDIA has to do much better with those new cards to win us back.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637002</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637002</id>
	<title>Re:Fermi needs a refresh or v2</title>
	<author>TheKidWho</author>
	<datestamp>1269626520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>No, I wouldn't compare this to the 5xxx at all.  Especially considering the Nvidia cards whoop the Radeons in tessellation and geometry operations.</p><p>Not to mention the overwhelming lead Nvidia has with GPGPU currently.</p></htmltext>
<tokenext>No , I would n't compare this to the 5xxx at all .
Especially considering the Nvidia cards whoop the Radeons in tessellation and geometry operations .
Not to mention the overwhelming lead Nvidia has with GPGPU currently .</tokentext>
<sentencetext>No, I wouldn't compare this to the 5xxx at all.
Especially considering the Nvidia cards whoop the Radeons in tessellation and geometry operations.
Not to mention the overwhelming lead Nvidia has with GPGPU currently.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636580</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636518</id>
	<title>Will it run Linux?</title>
	<author>Anonymous</author>
	<datestamp>1269622440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Seriously, that whole thing about drivers earlier makes me wonder if it's worth it to buy this beef without any way to make it sizzle.</p></htmltext>
<tokenext>Seriously , that whole thing about drivers earlier makes me wonder if it 's worth it to buy this beef without any way to make it sizzle .</tokentext>
<sentencetext>Seriously, that whole thing about drivers earlier makes me wonder if it's worth it to buy this beef without any way to make it sizzle.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31640514</id>
	<title>Re:Fermi needs a refresh or v2</title>
	<author>Suiggy</author>
	<datestamp>1269712200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>GTX470 and GTX480 will support OpenGL 4.0 when it ships in a couple of weeks.

AMD/ATI released their OpenGL 4.0 drivers for HD5000 series cards yesterday.</htmltext>
<tokenext>GTX470 and GTX480 will support OpenGL 4.0 when it ships in a couple of weeks .
AMD/ATI released their OpenGL 4.0 drivers for HD5000 series cards yesterday .</tokentext>
<sentencetext>GTX470 and GTX480 will support OpenGL 4.0 when it ships in a couple of weeks.
AMD/ATI released their OpenGL 4.0 drivers for HD5000 series cards yesterday.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31638532</parent>
</comment>
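<!--
For anyone wondering how "supports OpenGL 4.0" shows up in practice: the
version a driver actually exposes can be read back at runtime. A minimal
sketch in C; it assumes a current GL context already exists (context creation
needs a toolkit such as GLX, WGL, or SDL and is omitted here), and
report_gl_version is an illustrative name, not a real API.

#include <stdio.h>
#include <GL/gl.h>

/* Must be called with a current OpenGL context; glGetString returns NULL
   otherwise. */
void report_gl_version(void)
{
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    const char *version  = (const char *)glGetString(GL_VERSION);
    printf("Renderer: %s\n", renderer ? renderer : "(no GL context)");
    printf("OpenGL:   %s\n", version  ? version  : "(no GL context)");
}
-->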
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637442</id>
	<title>Re:Bleeding edge isn't usually worth it</title>
	<author>tirefire</author>
	<datestamp>1269720000000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p><div class="quote"><p>I mean look at it like this. You can probably get a card for $120-$150 now that will probably run every current game well right now. (Well except for Crisis)</p></div><p>Crysis came out in Q3 2007.  It's not really a current game anymore.  Its use as a benchmark for video card performance is frustrating because it's an incredibly inefficient game engine.  Don't get me wrong, it looks beautiful... but so do games that will run at twice the frame rate on the same system.</p><p><div class="quote"><p>So my experience is just buy a decent card ($120-$150) and in a few years buy another one and do whatever with the old one. (Sell it, give it to a family member whatever.)</p></div><p>Right on.  This is what I used to do until spring of 2007, when I bought an nVidia 8800 GTS 320 MB to play STALKER.  That card continues to serve me well with any game I throw at it.  I was expecting to need to upgrade it in 2009, but I never did... new games kept running great on it.  I've had that card for almost exactly THREE YEARS now and it still amazes me.  I've never had any piece of computing hardware that did that.<br> <br>

Changes in graphics card features and speed were really taking place at a white-hot pace between about 2003 and 2007.  Those years saw the introduction of cards like the Radeon 9800, the GeForce 6800, and the GF 8800.  All of those cards totally smashed their predecessors (from both nVidia AND ATI) in benchmarks.  It was even more amazing than the CPU world from 1999 to 2004, when clock rates were shooting through the roof and when AMD embarrassed Intel with the introduction of the 64-bit Hammer core (Athlon 64).</p></div>
	</htmltext>
<tokenext>I mean look at it like this .
You can probably get a card for $ 120- $ 150 now that will probably run every current game well right now .
( Well except for Crisis ) Crysis came out in Q3 2007 .
It 's not really a current game anymore .
Its use as a benchmark for video card performance is frustrating because it 's an incredibly inefficient game engine .
Do n't get me wrong , it looks beautiful... but so do games that will run at twice the frame rate on the same system .
So my experience is just buy a decent card ( $ 120- $ 150 ) and in a few years buy another one and do whatever with the old one .
( Sell it , give it to a family member whatever .
) Right on .
This is what I used to do until spring of 2007 , when I bought an nVidia 8800 GTS 320 MB to play STALKER .
That card continues to serve me well with any game I throw at it .
I was expecting to need to upgrade it in 2009 , but I never did... new games kept running great on it .
I 've had that card for almost exactly THREE YEARS now and it still amazes me .
I 've never had any piece of computing hardware that did that .
Changes in graphics card features and speed were really taking place at a white-hot pace between about 2003 and 2007 .
Those years saw the introduction of cards like the Radeon 9800 , the GeForce 6800 , and the GF 8800 .
All of those cards totally smashed their predecessors ( from both nVidia AND ATI ) in benchmarks .
It was even more amazing than the CPU world from 1999 to 2004 , when clock rates were shooting through the roof and when AMD embarrassed Intel with the introduction of the 64-bit Hammer core ( Athlon 64 ) .</tokentext>
<sentencetext>I mean look at it like this.
You can probably get a card for $120-$150 now that will probably run every current game well right now.
(Well except for Crisis) Crysis came out in Q3 2007.
It's not really a current game anymore.
Its use as a benchmark for video card performance is frustrating because it's an incredibly inefficient game engine.
Don't get me wrong, it looks beautiful... but so do games that will run at twice the frame rate on the same system.
So my experience is just buy a decent card ($120-$150) and in a few years buy another one and do whatever with the old one.
(Sell it, give it to a family member whatever.)
Right on.
This is what I used to do until spring of 2007, when I bought an nVidia 8800 GTS 320 MB to play STALKER.
That card continues to serve me well with any game I throw at it.
I was expecting to need to upgrade it in 2009, but I never did... new games kept running great on it.
I've had that card for almost exactly THREE YEARS now and it still amazes me.
I've never had any piece of computing hardware that did that.
Changes in graphics card features and speed were really taking place at a white-hot pace between about 2003 and 2007.
Those years saw the introduction of cards like the Radeon 9800, the GeForce 6800, and the GF 8800.
All of those cards totally smashed their predecessors (from both nVidia AND ATI) in benchmarks.
It was even more amazing than the CPU world from 1999 to 2004, when clock rates were shooting through the roof and when AMD embarrassed Intel with the introduction of the 64-bit Hammer core (Athlon 64).
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636820</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637492</id>
	<title>Re:$1000 for graphics</title>
	<author>Anonymous</author>
	<datestamp>1269721080000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Would you like a <a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16819115223&amp;cm_re=6_core-_-19-115-223-_-Product" title="newegg.com" rel="nofollow">$1130 CPU</a> [newegg.com] with that?</p></htmltext>
<tokenext>Would you like a $ 1130 CPU [ newegg.com ] with that ?</tokentext>
<sentencetext>Would you like a $1130 CPU [newegg.com] with that?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636556</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31646078</id>
	<title>Re:Fermi needs a refresh or v2</title>
	<author>BikeHelmet</author>
	<datestamp>1269771000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Hey, you missed some!</p><p>-Fermi excels at Tessellation (which isn't used in any current games)<br>-Fermi is great for GPGPU computing!</p><p>They <i>did</i> say this generation was going to focus more on scientists... about 5 months ago.</p></htmltext>
<tokenext>Hey , you missed some ! -Fermi excels at Tessellation ( which is n't used in any current games ) -Fermi is great for GPGPU computing ! They did say this generation was going to focus more on scientists... about 5 months ago .</tokentext>
<sentencetext>Hey, you missed some!
-Fermi excels at Tessellation (which isn't used in any current games)
-Fermi is great for GPGPU computing!
They did say this generation was going to focus more on scientists... about 5 months ago.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636580</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636556</id>
	<title>$1000 for graphics</title>
	<author>tpstigers</author>
	<datestamp>1269622800000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext>Come on - is that all?  There HAS to be a way I can spend 5 times that to play a video game.</htmltext>
<tokenext>Come on - is that all ?
There HAS to be a way I can spend 5 times that to play a video game .</tokentext>
<sentencetext>Come on - is that all?
There HAS to be a way I can spend 5 times that to play a video game.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637236</id>
	<title>Re:This is why we need the on-live service to succ</title>
	<author>anss123</author>
	<datestamp>1269629580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You don't have to get the absolute best, ya know? OnLive - a YouTube-like gaming service - is unlikely to give you a better gaming experience than a $70 graphics card. If you've got to have the absolute best graphics out there, then the PS360 is already getting long in the tooth, and MS/Sony are fretting more about their Wii-inspired controllers than graphics these days.</htmltext>
<tokenext>You do n't have to get the absolute best , ya know ?
OnLive - a YouTube-like gaming service - is unlikely to give you a better gaming experience than a $ 70 graphics card .
If you 've got to have the absolute best graphics out there , then the PS360 is already getting long in the tooth , and MS/Sony are fretting more about their Wii-inspired controllers than graphics these days .</tokentext>
<sentencetext>You don't have to get the absolute best, ya know?
OnLive - a YouTube-like gaming service - is unlikely to give you a better gaming experience than a $70 graphics card.
If you've got to have the absolute best graphics out there, then the PS360 is already getting long in the tooth, and MS/Sony are fretting more about their Wii-inspired controllers than graphics these days.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637102</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31641210</id>
	<title>Re:No source engine benchmarks?</title>
	<author>Anonymous</author>
	<datestamp>1269716640000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>Just about any card above $100 will run all of Valve's games maxed out. Benchmarks featuring Valve's games are pretty pointless.</p></htmltext>
<tokenext>Just about any card above $ 100 will run all of Valve 's games maxed out .
Benchmarks featuring Valve 's games are pretty pointless .</tokentext>
<sentencetext>Just about any card above $100 will run all of Valve's games maxed out.
Benchmarks featuring Valve's games are pretty pointless.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31638838</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31639062
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637642
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636580
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31640804
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637952
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637002
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636580
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637804
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636616
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637286
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636616
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31658736
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637214
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31639194
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637322
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637236
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637102
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31640470
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637642
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636580
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31642616
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636616
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31638206
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636610
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31640514
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31638532
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636580
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31643772
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637642
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636580
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637442
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636820
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637492
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636556
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31641210
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31638838
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637978
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637338
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637102
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31642848
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637940
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637508
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637338
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637102
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637088
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636970
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636610
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636842
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636782
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636580
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31641264
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31638374
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636556
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31641962
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636820
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31646078
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636580
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31641278
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637940
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636888
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636556
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637328
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636820
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31638188
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636556
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637546
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637338
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637102
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637276
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637102
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31642584
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637642
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636580
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31643782
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637338
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637102
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_27_0118233_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31639110
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636556
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_27_0118233.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636538
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_27_0118233.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31638838
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31641210
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_27_0118233.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636820
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31641962
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637328
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637442
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_27_0118233.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637940
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31641278
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31642848
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_27_0118233.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636610
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636970
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637088
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31638206
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_27_0118233.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636616
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637286
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637804
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31642616
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_27_0118233.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636518
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_27_0118233.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636580
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637002
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637952
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31640804
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637642
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31639062
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31642584
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31640470
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31643772
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636782
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636842
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31638532
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31640514
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31646078
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_27_0118233.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636604
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_27_0118233.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636802
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_27_0118233.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637102
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637276
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637338
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637978
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637508
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637546
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31643782
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637236
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_27_0118233.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636572
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_27_0118233.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636512
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_27_0118233.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637322
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31639194
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_27_0118233.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637214
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31658736
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_27_0118233.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636556
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31638374
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31641264
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31639110
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31637492
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31638188
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_27_0118233.31636888
</commentlist>
</conversation>
