<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_06_19_2331209</id>
	<title>SLI On Life Support For the AMD Platform</title>
	<author>Soulskill</author>
	<datestamp>1245420060000</datestamp>
<htmltext><a href="http://www.penstarsys.com/" rel="nofollow">JoshMST</a> writes <i>"For years AMD and Nvidia were like peas and carrots, and their SNAP partnership proved to be quite successful for both companies.  Things changed dramatically when AMD bought up ATI, and now it seems like Nvidia is <a href="http://www.pcper.com/article.php?aid=734">pulling the plug on SLI support for the AMD platform</a>.  While the chipset division at AMD may be a bitter rival to Nvidia, the CPU guys there have had a long and prosperous relationship with the Green Machine.  While declining chipset margins on the AMD side were attributed to AMD's lackluster processor offerings for the past several years, the Phenom II chips have reawakened interest in the platform and they have found a place in enthusiasts' hearts again.  Unfortunately for Nvidia, they are seemingly missing out on a significant revenue stream by not offering new chipsets to go with these processors.  They have also curtailed SLI adoption on the AMD platform, which couldn't be happening at a worse time."</i></htmltext>
<tokentext>JoshMST writes " For years AMD and Nvidia were like peas and carrots , and their SNAP partnership proved to be quite successful for both companies .
Things changed dramatically when AMD bought up ATI , and now it seems like Nvidia is pulling the plug on SLI support for the AMD platform .
While the chipset division at AMD may be a bitter rival to Nvidia , the CPU guys there have had a long and prosperous relationship with the Green Machine .
While declining chipset margins on the AMD side were attributed to AMD 's lackluster processor offerings for the past several years , the Phenom II chips have reawakened interest in the platform and they have found a place in enthusiasts ' hearts again .
Unfortunately for Nvidia , they are seemingly missing out on a significant revenue stream by not offering new chipsets to go with these processors .
They have also curtailed SLI adoption on the AMD platform , which could n't be happening at a worse time .
"</tokentext>
<sentencetext>JoshMST writes "For years AMD and Nvidia were like peas and carrots, and their SNAP partnership proved to be quite successful for both companies.
Things changed dramatically when AMD bought up ATI, and now it seems like Nvidia is pulling the plug on SLI support for the AMD platform.
While the chipset division at AMD may be a bitter rival to Nvidia, the CPU guys there have had a long and prosperous relationship with the Green Machine.
While declining chipset margins on the AMD side were attributed to AMD's lackluster processor offerings for the past several years, the Phenom II chips have reawakened interest in the platform and they have found a place in enthusiasts' hearts again.
Unfortunately for Nvidia, they are seemingly missing out on a significant revenue stream by not offering new chipsets to go with these processors.
They have also curtailed SLI adoption on the AMD platform, which couldn't be happening at a worse time.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28406063</id>
	<title>nVidia's chipset business is dead</title>
	<author>Alereon</author>
	<datestamp>1245504540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>There's no legitimate reason for nVidia to continue to make chipsets, and that's why they're exiting the market. SLI only requires nVidia chipsets due to driver and BIOS lockouts; the Intel X58 chipset proves that Crossfire and SLI can co-exist on a platform without any nVidia hardware whatsoever. In the integrated market both AMD and Intel have compelling onboard graphics that can scale well with the addition of one or two GPUs.</p><p>Now that AMD and Intel have competitive chipset offerings and nVidia doesn't need the chipset business to keep propping up GPU sales via SLI, there's no reason for them to keep fighting for a piece of a not very profitable market. If they exit the desktop/laptop/netbook chipset market with the current generation and refocus on GPU and embedded development, that improves their financial outlook.</p><p>It seems like nVidia is in a very precarious position right now: their GTX200-series GPUs are expensive and power-hungry compared to their AMD competitors, and it looks like AMD is going to execute a die-shrink to 40nm and an architecture change to DirectX11 with their new Evergreen GPUs approximately 6-9 months before nVidia makes a similar transition. This means that for about half a year, AMD is going to have significantly faster, more energy efficient GPUs that cost a fraction of what the comparable nVidia GPUs do to manufacture. nVidia needs to not run out of money before they manage to get their GPU out and capitalize on all their R&amp;D investments.</p></htmltext>
<tokentext>There 's no legitimate reason for nVidia to continue to make chipsets , and that 's why they 're exiting the market .
SLI only requires nVidia chipsets due to driver and BIOS lockouts ; the Intel X58 chipset proves that Crossfire and SLI can co-exist on a platform without any nVidia hardware whatsoever .
In the integrated market both AMD and Intel have compelling onboard graphics that can scale well with the addition of one or two GPUs .
Now that AMD and Intel have competitive chipset offerings and nVidia does n't need the chipset business to keep propping up GPU sales via SLI , there 's no reason for them to keep fighting for a piece of a not very profitable market .
If they exit the desktop/laptop/netbook chipset market with the current generation and refocus on GPU and embedded development , that improves their financial outlook .
It seems like nVidia is in a very precarious position right now : their GTX200-series GPUs are expensive and power-hungry compared to their AMD competitors , and it looks like AMD is going to execute a die-shrink to 40nm and an architecture change to DirectX11 with their new Evergreen GPUs approximately 6-9 months before nVidia makes a similar transition .
This means that for about half a year , AMD is going to have significantly faster , more energy efficient GPUs that cost a fraction of what the comparable nVidia GPUs do to manufacture .
nVidia needs to not run out of money before they manage to get their GPU out and capitalize on all their R&amp;D investments .</tokentext>
<sentencetext>There's no legitimate reason for nVidia to continue to make chipsets, and that's why they're exiting the market.
SLI only requires nVidia chipsets due to driver and BIOS lockouts; the Intel X58 chipset proves that Crossfire and SLI can co-exist on a platform without any nVidia hardware whatsoever.
In the integrated market both AMD and Intel have compelling onboard graphics that can scale well with the addition of one or two GPUs.
Now that AMD and Intel have competitive chipset offerings and nVidia doesn't need the chipset business to keep propping up GPU sales via SLI, there's no reason for them to keep fighting for a piece of a not very profitable market.
If they exit the desktop/laptop/netbook chipset market with the current generation and refocus on GPU and embedded development, that improves their financial outlook.
It seems like nVidia is in a very precarious position right now: their GTX200-series GPUs are expensive and power-hungry compared to their AMD competitors, and it looks like AMD is going to execute a die-shrink to 40nm and an architecture change to DirectX11 with their new Evergreen GPUs approximately 6-9 months before nVidia makes a similar transition.
This means that for about half a year, AMD is going to have significantly faster, more energy efficient GPUs that cost a fraction of what the comparable nVidia GPUs do to manufacture.
nVidia needs to not run out of money before they manage to get their GPU out and capitalize on all their R&amp;D investments.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398757</id>
	<title>Re:amd</title>
	<author>Anonymous</author>
	<datestamp>1245426960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I hope this comment is in jest pointing out that people have been claiming that AMD is in trouble for years and despote those prognostications AMD continues to carry on.
<br>Either that you're totally wrong. AMD's not the GM of the PC industry.</htmltext>
<tokentext>I hope this comment is in jest , pointing out that people have been claiming that AMD is in trouble for years and despite those prognostications AMD continues to carry on .
Either that or you 're totally wrong .
AMD 's not the GM of the PC industry .</tokentext>
<sentencetext>I hope this comment is in jest, pointing out that people have been claiming that AMD is in trouble for years and despite those prognostications AMD continues to carry on.
Either that or you're totally wrong.
AMD's not the GM of the PC industry.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398561</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398815</id>
	<title>Re:I don't know but...ONE CRUCIAL WORD MISSING</title>
	<author>Nom du Keyboard</author>
	<datestamp>1245427860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Nvidia and Intel became the other</p></div></blockquote><p>
You forgot one important thing: <b>Larrabee.</b></p>
	</htmltext>
<tokentext>Nvidia and Intel became the other You forgot one important thing : Larrabee .
<sentencetext>Nvidia and Intel became the other
You forgot one important thing: Larrabee.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398489</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28405849</id>
	<title>Re:isn't sli just bs tech designed to sell more ca</title>
	<author>averner</author>
	<datestamp>1245503040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Not necessarily.<br> <br>

Not sure about NVidia's cards, but right now two of AMD's 4850s are cheaper than and just as fast as a single 4890.  It's the best deal around the $220 price point.<br> <br>

Source: <a href="http://www.tomshardware.com/reviews/radeon-geforce-price,2323-4.html" title="tomshardware.com">http://www.tomshardware.com/reviews/radeon-geforce-price,2323-4.html</a> [tomshardware.com]</htmltext>
<tokentext>Not necessarily .
Not sure about NVidia 's cards , but right now two of AMD 's 4850s are cheaper than and just as fast as a single 4890 .
It 's the best deal around the $ 220 price point .
Source : http://www.tomshardware.com/reviews/radeon-geforce-price,2323-4.html [ tomshardware.com ]</tokentext>
<sentencetext>Not necessarily.
Not sure about NVidia's cards, but right now two of AMD's 4850s are cheaper than and just as fast as a single 4890.
It's the best deal around the $220 price point.
Source: http://www.tomshardware.com/reviews/radeon-geforce-price,2323-4.html [tomshardware.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399173</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28400019</id>
	<title>Re:Who cares?</title>
	<author>Anonymous</author>
	<datestamp>1245531420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I don't think SLI really has anything to do with OpenCL.  You just need the physical slot - not the magic that tells one card how to properly synchronize frames rendered.  If your dataset is parallel enough to work well for OpenCL I don't see why it shouldn't propagate across a second card without issue.</p></htmltext>
<tokentext>I do n't think SLI really has anything to do with OpenCL .
You just need the physical slot - not the magic that tells one card how to properly synchronize frames rendered .
If your dataset is parallel enough to work well for OpenCL I do n't see why it should n't propagate across a second card without issue .</tokentext>
<sentencetext>I don't think SLI really has anything to do with OpenCL.
You just need the physical slot - not the magic that tells one card how to properly synchronize frames rendered.
If your dataset is parallel enough to work well for OpenCL I don't see why it shouldn't propagate across a second card without issue.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398881</parent>
</comment>
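The point made in the comment above, that a sufficiently data-parallel OpenCL workload can span two cards with no SLI involved, can be sketched in host code. A minimal illustration, assuming two OpenCL-capable GPUs on one platform; the `scale` kernel and all names are invented for the example, and error checking is omitted:

```c
/* Sketch only: split a data-parallel job across two GPUs, each with its own
 * context and queue, no inter-card synchronization needed. */
#include <CL/cl.h>
#include <stdio.h>

static const char *src =
    "__kernel void scale(__global float *v) {"
    "    size_t i = get_global_id(0);"
    "    v[i] = v[i] * 2.0f;"
    "}";

int main(void) {
    enum { N = 1 << 20 };
    static float data[N];
    for (size_t i = 0; i < N; i++) data[i] = (float)i;

    cl_platform_id plat;
    clGetPlatformIDs(1, &plat, NULL);

    cl_device_id dev[2];
    cl_uint ndev = 0;
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 2, dev, &ndev);
    if (ndev < 2) { fprintf(stderr, "need two GPUs for this sketch\n"); return 1; }

    size_t half = N / 2;
    for (cl_uint d = 0; d < 2; d++) {
        cl_context ctx = clCreateContext(NULL, 1, &dev[d], NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev[d], 0, NULL);
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev[d], NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "scale", NULL);

        /* each device owns half of the array */
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                    half * sizeof(float), data + d * half, NULL);
        clSetKernelArg(k, 0, sizeof(buf), &buf);
        clEnqueueNDRangeKernel(q, k, 1, NULL, &half, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, buf, CL_TRUE, 0, half * sizeof(float),
                            data + d * half, 0, NULL, NULL);
    }
    printf("data[1] = %f (expect 2.0)\n", data[1]);
    return 0;
}
```

The two halves are processed sequentially here for brevity; enqueued from separate threads they would run concurrently, which is exactly the frame-synchronization-free propagation the commenter describes.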
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398829</id>
	<title>Re:Who cares?</title>
	<author>Repossessed</author>
	<datestamp>1245428040000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
<htmltext><p>Not only that, but SLI specifically is so bad that dual card setups are one of the few places you actually want to have ATI over nVidia.</p></htmltext>
<tokentext>Not only that , but SLI specifically is so bad that dual card setups are one of the few places you actually want to have ATI over nVidia .</tokentext>
<sentencetext>Not only that, but SLI specifically is so bad that dual card setups are one of the few places you actually want to have ATI over nVidia.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398539</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398495</id>
	<title>Talk about stupid</title>
	<author>Hansele</author>
	<datestamp>1245424260000</datestamp>
<modclass>Interesting</modclass>
	<modscore>4</modscore>
<htmltext>Why on earth if you're NVIDIA do you make it harder to find mainboards to leverage your tech?  I'd have expected this move by AMD first; you'd think NVIDIA would want their tech available everyplace possible.</htmltext>
<tokentext>Why on earth if you 're NVIDIA do you make it harder to find mainboards to leverage your tech ?
I 'd have expected this move by AMD first ; you 'd think NVIDIA would want their tech available everyplace possible .</tokentext>
<sentencetext>Why on earth if you're NVIDIA do you make it harder to find mainboards to leverage your tech?
I'd have expected this move by AMD first; you'd think NVIDIA would want their tech available everyplace possible.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399291</id>
	<title>Single card, Dual GPU</title>
<author>dr_wheel</author>
	<datestamp>1245433560000</datestamp>
<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext>There are single-slot dual GPU offerings from both camps. If you actually need/want SLI/CrossFire, what's the point of running 2 cards when you can have 1?</htmltext>
<tokentext>There are single-slot dual GPU offerings from both camps .
If you actually need/want SLI/CrossFire , what 's the point of running 2 cards when you can have 1 ?</tokentext>
<sentencetext>There are single-slot dual GPU offerings from both camps.
If you actually need/want SLI/CrossFire, what's the point of running 2 cards when you can have 1?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28401367</id>
	<title>Re:Who CARES about SLI?</title>
	<author>minorproblem</author>
	<datestamp>1245508920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext>I've worked out it's cheaper to just buy the best bang for buck graphics card each year than to get the best card and hold onto it.</htmltext>
<tokentext>I 've worked out it 's cheaper to just buy the best bang for buck graphics card each year than to get the best card and hold onto it .</tokentext>
<sentencetext>I've worked out it's cheaper to just buy the best bang for buck graphics card each year than to get the best card and hold onto it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398569</parent>
</comment>
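A back-of-the-envelope version of this upgrade math might look like the sketch below. All prices are invented for illustration; the comment itself gives no numbers:

```c
/* Hypothetical illustration of the yearly bang-for-buck strategy;
 * every figure here is an assumption, not data from the thread. */
#include <stdio.h>

int main(void) {
    const double midrange = 150.0;  /* assumed sweet-spot card, bought yearly */
    const double flagship = 450.0;  /* assumed high-end card, kept 3 years    */
    const double resale   = 75.0;   /* assumed resale of last year's card     */
    const int years = 3;

    /* Yearly buyer: pay for a new card each year, recoup some via resale. */
    double yearly_net = years * midrange - (years - 1) * resale;
    printf("yearly upgrader net spend over %d years: $%.0f\n", years, yearly_net);
    printf("flagship buyer one-time spend:           $%.0f\n", flagship);
    /* Under these assumptions the yearly path costs less in total ($300 vs
     * $450) and holds a current-generation card in every year, which is the
     * commenter's point. */
    return 0;
}
```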
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399011</id>
	<title>Re:Who CARES about SLI?</title>
	<author>Anonymous</author>
	<datestamp>1245430080000</datestamp>
<modclass>Interesting</modclass>
	<modscore>1</modscore>
<htmltext><p>I just wanted to add here: AMD to a certain degree already has with the Radeon 4770. I have one of these in a system with an overclocked C2D E4300, Dual Channel DDR2, and an Intel G31 motherboard. The whole rig prolly cost me ~400 bucks (Maybe 450 given that the CPU and its prior mobo cost me ~ 3 years ago), and runs anything except Vista and a few badly written games at 1360x768 with 4x AA and most visual settings maxxed with almost no hiccups.</p><p>And more importantly, do you know what the total wattage for it is? *200* watts at full load (idle has it between 100-120), with some cheap no-name 450W PS, cheap noname case, 50 dollar retail Gigabyte motherboard, and an obsolete processor overclocked to 3ghz. This is almost the pinnacle of inefficient components, sans the videocard, and none of it broke the bank.</p><p>Honestly once you factor in peripherals, spare controllers, etc nowadays, a modern console will cost you MORE than a PC, nevermind the retail prices on games (59.99... HAHAHA, 49.99 release day or 69.99 Collector's, and I RARELY buy a game that's not retailing for 35 bucks or less.)</p><p>Just my 2 cents, YMMV</p></htmltext>
<tokentext>I just wanted to add here : AMD to a certain degree already has with the Radeon 4770 .
I have one of these in a system with an overclocked C2D E4300 , Dual Channel DDR2 , and an Intel G31 motherboard .
The whole rig prolly cost me ~ 400 bucks ( Maybe 450 given that the CPU and its prior mobo cost me ~ 3 years ago ) , and runs anything except Vista and a few badly written games at 1360x768 with 4x AA and most visual settings maxxed with almost no hiccups .
And more importantly , do you know what the total wattage for it is ?
* 200 * watts at full load ( idle has it between 100-120 ) , with some cheap no-name 450W PS , cheap noname case , 50 dollar retail Gigabyte motherboard , and an obsolete processor overclocked to 3ghz .
This is almost the pinnacle of inefficient components , sans the videocard , and none of it broke the bank .
Honestly once you factor in peripherals , spare controllers , etc nowadays , a modern console will cost you MORE than a PC , nevermind the retail prices on games ( 59.99... HAHAHA , 49.99 release day or 69.99 Collector 's , and I RARELY buy a game that 's not retailing for 35 bucks or less . )
Just my 2 cents , YMMV</tokentext>
<sentencetext>I just wanted to add here: AMD to a certain degree already has with the Radeon 4770.
I have one of these in a system with an overclocked C2D E4300, Dual Channel DDR2, and an Intel G31 motherboard.
The whole rig prolly cost me ~400 bucks (Maybe 450 given that the CPU and its prior mobo cost me ~ 3 years ago), and runs anything except Vista and a few badly written games at 1360x768 with 4x AA and most visual settings maxxed with almost no hiccups.
And more importantly, do you know what the total wattage for it is?
*200* watts at full load (idle has it between 100-120), with some cheap no-name 450W PS, cheap noname case, 50 dollar retail Gigabyte motherboard, and an obsolete processor overclocked to 3ghz.
This is almost the pinnacle of inefficient components, sans the videocard, and none of it broke the bank.
Honestly once you factor in peripherals, spare controllers, etc nowadays, a modern console will cost you MORE than a PC, nevermind the retail prices on games (59.99... HAHAHA, 49.99 release day or 69.99 Collector's, and I RARELY buy a game that's not retailing for 35 bucks or less.)
Just my 2 cents, YMMV</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398569</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398561</id>
	<title>Re:amd</title>
	<author>Anonymous</author>
	<datestamp>1245424920000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext>Good lord.  The end of AMD started about 3 years ago.  Where have you been?<br> <br>

This has got to be at least the middle of the end.</htmltext>
<tokentext>Good lord .
The end of AMD started about 3 years ago .
Where have you been ?
This has got to be at least the middle of the end .</tokentext>
<sentencetext>Good lord.
The end of AMD started about 3 years ago.
Where have you been?
This has got to be at least the middle of the end.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398491</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28402547</id>
	<title>Re:Who CARES about SLI?</title>
	<author>jbevren</author>
	<datestamp>1245519660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'll have to agree here.  I got a pair of matched cards for SLI use, and after being dismayed by the lack of apparent improvement I simply disabled SLI and use the second card for an additional monitor while playing my favorite game.</p></htmltext>
<tokentext>I 'll have to agree here .
I got a pair of matched cards for SLI use , and after being dismayed by the lack of apparent improvement I simply disabled SLI and use the second card for an additional monitor while playing my favorite game .</tokentext>
<sentencetext>I'll have to agree here.
I got a pair of matched cards for SLI use, and after being dismayed by the lack of apparent improvement I simply disabled SLI and use the second card for an additional monitor while playing my favorite game.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398569</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28400059</id>
	<title>Re:Single card, Dual GPU</title>
	<author>TranceThrust</author>
	<datestamp>1245488760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Two dual GPUs in SLI/Crossfire? Maybe?</htmltext>
<tokentext>Two dual GPUs in SLI/Crossfire ?
Maybe ?</tokentext>
<sentencetext>Two dual GPUs in SLI/Crossfire?
Maybe?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399291</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398719</id>
	<title>Maybe, Maybe Not</title>
	<author>Anonymous</author>
	<datestamp>1245426720000</datestamp>
<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>I don't think this is entirely nVidia's doing. It is in AMD's best interest to push ATI cards and Crossfire since they own the company. This can be seen with the recent "Dragon" platform - pushing AMD Phenom CPU, AMD 790GX/FX chipset and ATI 4750/4950 graphics cards as a single solution.</p></htmltext>
<tokentext>I do n't think this is entirely nVidia 's doing .
It is in AMD 's best interest to push ATI cards and Crossfire since they own the company .
This can be seen with the recent " Dragon " platform - pushing AMD Phenom CPU , AMD 790GX/FX chipset and ATI 4750/4950 graphics cards as a single solution .</tokentext>
<sentencetext>I don't think this is entirely nVidia's doing.
It is in AMD's best interest to push ATI cards and Crossfire since they own the company.
This can be seen with the recent "Dragon" platform - pushing AMD Phenom CPU, AMD 790GX/FX chipset and ATI 4750/4950 graphics cards as a single solution.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28400213</id>
	<title>Re:I don't know but...</title>
	<author>ciroknight</author>
	<datestamp>1245491460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p><div class="quote"><p>This is pure conjecture, but to me it seemed as if when AMD and ATI became one team and Nvidia and Intel became the other, that it would make sense for each one to offer incentives (read: threats) so that their partner would not bend over for the competition. So it's not like it's completely up to Nvidia to start improving their standing with AMD because of pressure from Intel. If that made any sense, then I'll drink a couple more beers before posting next time. Out</p></div><p>nVidia has made it quite clear on many occasions that they are not team players. They don't care about anyone except themselves, will constantly put fault on anyone else they can, and only act in the interest of protecting their own 'precious' IP. Not that Intel is any better on that last part, though.
<br> <br>
nVidia and Intel aren't a team. They're competitors. DAAMIT is only starting to piss off nVidia more because they can actually push a whole platform (CPU + Chipset + Graphics), something Intel's had since its birth, and something nVidia has desperately wanted but never been able to pull off (with attempt after failed attempt using ARM chips, Intel chips, AMD's CPUs, chipsets, etc).
<br> <br>
If anything, Intel and AMD have all of the incentive in the world to simply stop caring about nVidia and move to push them out of the market entirely (see Larrabee and AMD's similar Compute-GPU projects). It's going to get harder for nVidia to push graphics chips if everyone in the market they're competing with has high-performance graphics on-die with their CPUs.</p>
	</htmltext>
<tokentext>This is pure conjecture , but to me it seemed as if when AMD and ATI became one team and Nvidia and Intel became the other , that it would make sense for each one to offer incentives ( read : threats ) so that their partner would not bend over for the competition .
So it 's not like it 's completely up to Nvidia to start improving their standing with AMD because of pressure from Intel .
If that made any sense , then I 'll drink a couple more beers before posting next time .
Out
nVidia has made it quite clear on many occasions that they are not team players .
They do n't care about anyone except themselves , will constantly put fault on anyone else they can , and only act in the interest of protecting their own 'precious ' IP .
Not that Intel is any better on that last part , though .
nVidia and Intel are n't a team .
They 're competitors .
DAAMIT is only starting to piss off nVidia more because they can actually push a whole platform ( CPU + Chipset + Graphics ) , something Intel 's had since its birth , and something nVidia has desperately wanted but never been able to pull off ( with attempt after failed attempt using ARM chips , Intel chips , AMD 's CPUs , chipsets , etc ) .
If anything , Intel and AMD have all of the incentive in the world to simply stop caring about nVidia and move to push them out of the market entirely ( see Larrabee and AMD 's similar Compute-GPU projects ) .
It 's going to get harder for nVidia to push graphics chips if everyone in the market they 're competing with has high-performance graphics on-die with their CPUs .</tokentext>
<sentencetext>This is pure conjecture, but to me it seemed as if when AMD and ATI became one team and Nvidia and Intel became the other, that it would make sense for each one to offer incentives (read: threats) so that their partner would not bend over for the competition.
So it's not like it's completely up to Nvidia to start improving their standing with AMD because of pressure from Intel.
If that made any sense, then I'll drink a couple more beers before posting next time.
Out
nVidia has made it quite clear on many occasions that they are not team players.
They don't care about anyone except themselves, will constantly put fault on anyone else they can, and only act in the interest of protecting their own 'precious' IP.
Not that Intel is any better on that last part, though.
nVidia and Intel aren't a team.
They're competitors.
DAAMIT is only starting to piss off nVidia more because they can actually push a whole platform (CPU + Chipset + Graphics), something Intel's had since its birth, and something nVidia has desperately wanted but never been able to pull off (with attempt after failed attempt using ARM chips, Intel chips, AMD's CPUs, chipsets, etc).
If anything, Intel and AMD have all of the incentive in the world to simply stop caring about nVidia and move to push them out of the market entirely (see Larrabee and AMD's similar Compute-GPU projects).
It's going to get harder for nVidia to push graphics chips if everyone in the market they're competing with has high-performance graphics on-die with their CPUs.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398489</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398949</id>
	<title>Re:I don't know but...</title>
	<author>Anonymous</author>
	<datestamp>1245429360000</datestamp>
<modclass>Interesting</modclass>
	<modscore>2</modscore>
<htmltext><p>That's not really an accurate portrayal of what's going on.  In reality it's more like, Intel is against the CPU side of AMD, in a semi-cordial relationship with the graphics side of AMD (ATI) and swatting at Nvidia like an annoying bug... which is all that Nvidia is compared to Intel despite Jen-Hsun Huang's deluded sense of grandeur.<br>Remember that Intel has supported ATI's Crossfire configuration natively for a long time, and this support continues into the high-end X58 chipsets making Crossfire a very easy solution to implement.  SLI on the other hand is either done through a dodgy "certification" of motherboard BIOSes by Nvidia, or by an actual bridge chip which has to be added to the motherboard simply to do SLI.</p><p>Larrabee will change many things, especially bringing Intel into competition for high end graphics.  Frankly, I can't wait because it will mean that a fully documented architecture (vectorized x86) with pre-existing compiler support will finally be available.  Linux stands to gain as the biggest beneficiary since getting graphics and general-purpose software running on Larrabee won't require the black-box drivers that NVidia and ATI supply, or the "documentation" that ATI dumps out that takes 2 years for open source developers to get even half working.</p></htmltext>
<tokentext>That 's not really an accurate portrayal of what 's going on .
In reality it 's more like , Intel is against the CPU side of AMD , in a semi-cordial relationship with the graphics side of AMD ( ATI ) and swatting at Nvidia like an annoying bug... which is all that Nvidia is compared to Intel despite Jen-Hsun Huang 's deluded sense of grandeur .
Remember that Intel has supported ATI 's Crossfire configuration natively for a long time , and this support continues into the high-end X58 chipsets making Crossfire a very easy solution to implement .
SLI on the other hand is either done through a dodgy " certification " of motherboard BIOSes by Nvidia , or by an actual bridge chip which has to be added to the motherboard simply to do SLI .
Larrabee will change many things , especially bringing Intel into competition for high end graphics .
Frankly , I ca n't wait because it will mean that a fully documented architecture ( vectorized x86 ) with pre-existing compiler support will finally be available .
Linux stands to gain as the biggest beneficiary since getting graphics and general-purpose software running on Larrabee wo n't require the black-box drivers that NVidia and ATI supply , or the " documentation " that ATI dumps out that takes 2 years for open source developers to get even half working .</tokentext>
<sentencetext>That's not really an accurate portrayal of what's going on.
In reality it's more like, Intel is against the CPU side of AMD, in a semi-cordial relationship with the graphics side of AMD (ATI) and swatting at Nvidia like an annoying bug... which is all that Nvidia is compared to Intel despite Jen-Hsun Huang's deluded sense of grandeur.
Remember that Intel has supported ATI's Crossfire configuration natively for a long time, and this support continues into the high-end X58 chipsets making Crossfire a very easy solution to implement.
SLI on the other hand is either done through a dodgy "certification" of motherboard BIOSes by Nvidia, or by an actual bridge chip which has to be added to the motherboard simply to do SLI.
Larrabee will change many things, especially bringing Intel into competition for high end graphics.
Frankly, I can't wait because it will mean that a fully documented architecture (vectorized x86) with pre-existing compiler support will finally be available.
Linux stands to gain as the biggest beneficiary since getting graphics and general-purpose software running on Larrabee won't require the black-box drivers that NVidia and ATI supply, or the "documentation" that ATI dumps out that takes 2 years for open source developers to get even half working.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398489</parent>
</comment>
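The "vectorized x86 with pre-existing compiler support" point above can be made concrete with a small sketch. SSE intrinsics stand in here for the programming model the poster is describing: Larrabee's actual vector ISA (LRBni) was 512-bit wide and is not shown, but the idea is ordinary, documented x86 vector code that mainstream compilers already understand, rather than a black-box driver stack:

```c
/* Illustrative stand-in for "vectorized x86": scale an array by a constant,
 * four floats per instruction, using the documented SSE intrinsics that any
 * mainstream x86 compiler supports out of the box. */
#include <xmmintrin.h>

void scale(float *v, float s, int n) {
    __m128 k = _mm_set1_ps(s);              /* broadcast s into all 4 lanes */
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 x = _mm_loadu_ps(v + i);     /* unaligned load of 4 floats */
        _mm_storeu_ps(v + i, _mm_mul_ps(x, k));
    }
    for (; i < n; i++)                      /* scalar tail for leftovers */
        v[i] *= s;
}
```

The same loop written as plain C would typically be auto-vectorized by the compiler anyway, which is the commenter's argument for why an x86-based GPU would be easy to target from open toolchains.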
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398881</id>
	<title>Re:Who cares?</title>
	<author>Anonymous</author>
	<datestamp>1245428580000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
<htmltext><p><div class="quote"><p>Dual GPU solutions are so pointless, a waste of money for little performance gain, that doesn't even work in some games.</p></div><p>Think OpenCL. I couldn't care less about Streams or CUDA. But I do care about OpenCL/OpenGL and the Engineering worlds. Games will get it sooner rather than later, which is why OpenCL will thrive.</p>
	</htmltext>
<tokentext>Dual GPU solutions are so pointless , a waste of money for little performance gain , that does n't even work in some games .
Think OpenCL .
I could n't care less about Streams or CUDA .
But I do care about OpenCL/OpenGL and the Engineering worlds .
Games will get it sooner rather than later , which is why OpenCL will thrive .
<sentencetext>Dual GPU solutions are so pointless, a waste of money for little performance gain, that doesn't even work in some games.Think OpenCL.
I couldn't care less about Streams or CUDA.
But I do care about OpenCL/OpenGL and the Engineering worlds.
Games will get it sooner rather than later, which is why OpenCL will thrive.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398539</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28400509</id>
	<title>Re:isn't sli just bs tech designed to sell more ca</title>
	<author>Aladrin</author>
	<datestamp>1245496080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>And who buys AMD?  People looking to get better bang for the buck.  In other words, people who are unlikely to double the cost of the video card for only 50% more performance.</p><p>While I think this is a silly move by nVidia (it makes them look bad to their customer base), it probably isn't nearly as dumb a move as it looks at first glance.  They probably have a pretty good idea of what portion of their customers use AMD and SLI currently, and it's probably pretty low.</p></htmltext>
<tokentext>And who buys AMD ?
People looking to get better bang for the buck .
In other words , people who are unlikely to double the cost of the video card for only 50 % more performance .
While I think this is a silly move by nVidia ( it makes them look bad to their customer base ) , it probably is n't nearly as dumb a move as it looks at first glance .
They probably have a pretty good idea of what portion of their customers use AMD and SLI currently , and it 's probably pretty low .</tokentext>
<sentencetext>And who buys AMD?
People looking to get better bang for the buck.
In other words, people who are unlikely to double the cost of the video card for only 50% more performance.
While I think this is a silly move by nVidia (it makes them look bad to their customer base), it probably isn't nearly as dumb a move as it looks at first glance.
They probably have a pretty good idea of what portion of their customers use AMD and SLI currently, and it's probably pretty low.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399173</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398511</id>
	<title>What's the news?</title>
	<author>FutureDomain</author>
	<datestamp>1245424440000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
<htmltext>NVIDIA tries to jinx AMD, but ends up jinxing themselves. This has been tried throughout the ages and often ends up at the same result.<br>Move on, nothing to see here.</htmltext>
<tokentext>NVIDIA tries to jinx AMD , but ends up jinxing themselves .
This has been tried throughout the ages and often ends up at the same result .
Move on , nothing to see here .</tokentext>
<sentencetext>NVIDIA tries to jinx AMD, but ends up jinxing themselves.
This has been tried throughout the ages and often ends up at the same result.
Move on, nothing to see here.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399327</id>
	<title>Re:I don't know but...</title>
	<author>Anonymous</author>
	<datestamp>1245434160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>In case you didn't notice, AMD released documents for ATI chips quite some time ago. You now have at least three driver choices with ATI and Xorg based on the real specs, and at least two of them are open sourced. Welcome to yesterday.</p></htmltext>
<tokentext>In case you did n't notice , AMD released documents for ATI chips quite some time ago .
You now have at least three driver choices with ATI and Xorg based on the real specs , and at least two of them are open sourced .
Welcome to yesterday .</tokentext>
<sentencetext>In case you didn't notice, AMD released documents for ATI chips quite some time ago.
You now have at least three driver choices with ATI and Xorg based on the real specs, and at least two of them are open sourced.
Welcome to yesterday.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398949</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28405511</id>
	<title>Re:isn't sli just bs tech designed to sell more ca</title>
	<author>ShakaUVM</author>
	<datestamp>1245501000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>&gt;&gt;How many people seriously drop the coin to do this?</p><p>My motherboard has SLI support (I bought it in December 2004, on the off chance the numbers would make sense in the future.) But when it came time to replace my 6800, it made more sense to buy a 7900 (which was like 10x faster) rather than a second 6800, which probably would have entailed needing a PSU upgrade as well.</p><p>When it came time to replace my 7900, it made more sense to get an 8800 than a second 7900. When it came time to replace the 8800, it made more sense to get a 200-series.</p><p>At some point, I guess, I'll need to get a new motherboard (though I can run all games just fine, with my CPU upgraded about 2 years back to a 4800+ X2), but since I can run all games at a decent FPS, why bother?</p><p>But when I do, I might get SLI support on it anyway. Future proofing.</p></htmltext>
<tokentext>&gt; &gt; How many people seriously drop the coin to do this ?
My motherboard has SLI support ( I bought it in December 2004 , on the off chance the numbers would make sense in the future . )
But when it came time to replace my 6800 , it made more sense to buy a 7900 ( which was like 10x faster ) rather than a second 6800 , which probably would have entailed needing a PSU upgrade as well .
When it came time to replace my 7900 , it made more sense to get an 8800 than a second 7900 .
When it came time to replace the 8800 , it made more sense to get a 200-series .
At some point , I guess , I 'll need to get a new motherboard ( though I can run all games just fine , with my CPU upgraded about 2 years back to a 4800 + X2 ) , but since I can run all games at a decent FPS , why bother ?
But when I do , I might get SLI support on it anyway .
Future proofing .</tokentext>
<sentencetext>&gt;&gt;How many people seriously drop the coin to do this?
My motherboard has SLI support (I bought it in December 2004, on the off chance the numbers would make sense in the future.)
But when it came time to replace my 6800, it made more sense to buy a 7900 (which was like 10x faster) rather than a second 6800, which probably would have entailed needing a PSU upgrade as well.
When it came time to replace my 7900, it made more sense to get an 8800 than a second 7900.
When it came time to replace the 8800, it made more sense to get a 200-series.
At some point, I guess, I'll need to get a new motherboard (though I can run all games just fine, with my CPU upgraded about 2 years back to a 4800+ X2), but since I can run all games at a decent FPS, why bother?
But when I do, I might get SLI support on it anyway.
Future proofing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399173</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28408055</id>
	<title>Re:Who cares?</title>
	<author>hardwarefreak</author>
	<datestamp>1245522960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p><div class="quote"><p>Dual GPU solutions are so pointless, a waste of money for little performance gain, that doesn't even work in some games.</p></div><p>This wasn't always the case, although back then the marketing term GPU hadn't been invented yet.  I have a pair of 12MB 3Dfx branded Voodoo2 cards (post STB acquisition), and they doubled the performance of every game I played.  There was no overhead loss of any kind.  The performance scaling was performed almost entirely in hardware, with little driver support needed.  Remember all the benchmarks at AnandTech et al that showed perfect 2x scaling for the Voodoo2 SLI setup?</p><p>The main reason for this is that geometry processing was still mostly handled by the host CPU back then.  Once geometry was moved into the graphics chip (GPU), the original scan line interleave became impossible.  Or, I should say, to do so would require idling one GPU's geometry unit altogether, and only utilizing its raster units.  The main advantage of a GPU *IS* the built-in geometry unit--everything on one chip, shorter communications paths, greater bandwidth between units and memory, etc.</p><p>Does anyone remember the Obsidian Alchemy cards?  They put Voodoo2 SLI on a single board.  In addition to this 'consumer' card, they had a military/government product which paired 4 of these boards into a Pentium II system for a total of 8 Voodoo2s.  All of them were daisy chained together, and had external genlock/framelock F connectors to drive synchronized scene output on a set of 4 displays for flight/battle tank/ship/helicopter simulations.  They also had the ability to drive a single display at 2048x1536, IIRC, with full 8x FSAA, albeit at 16bpp.</p><p>My point, after all that text above, is that graphics is, and always has been, extremely parallel in nature.  If the consumer market truly demands scalable graphics, from the standpoint of slapping more and more cards in a system and getting linear scalability, then the GPU 'industry' is going to have to go back to the scan line interleave model and put geometry processing back on the host CPU.  This will require a driver and pipeline rewrite across the industry, as all the APIs (DirectX and OpenGL) have been built up since around 1998 for integrated GPUs.</p><p>The reason we're where we are is chip production cost.  It's cost, cost, cost.  Manufacturers need to minimize the number of unique chips they produce due to cost.  That's why what I write below will likely never happen...</p><p>The other 'option' is for the GPU guys to make 2 separate chips, such as Intergraph and 3Dlabs did back in the day, and 3Dfx did with their final product, the Voodoo5 series.  They had a dedicated geometry chip, and dedicated raster chips, decoupling them, vs the nVidia design, so as to allow multiple raster chips for greater performance.  (Back then the APIs and the games were still primarily pixel bound, not geometry bound).  This was done, albeit, on one board.</p><p>However, it would be possible, and possibly very advantageous, to create one board with a geometry chip, and a board with the raster chips, with custom cabled interconnects between the cards, that could handle extremely high bandwidth.  In essence, we're taking the old SLI model, but offloading the geometry from the host CPU to a dedicated chip, just like we do today, but we're decoupling the raster function from the geometry at the physical hardware level.
Today it's all on the single chip.</p><p>Geometry processing isn't inherently parallel in nature.  It is mostly serial, defining the 3D scene, one polygon after another, and they must be in order, lest the scene be corrupted.  This is why current "SLI" schemes fail to deliver anywhere close to linear speedup.  They expend more host CPU resources crunching driver code trying to divvy up the geometry load between two GPUs than they do actually dividing the total graphics processing load across two GPUs--both geometry and raster.</p><p>A product based on the above architecture would kick the dog shit out of any "SLI" solution on the market today.  The problem is no one is going to build it because it has multiple different parts, substantially driving up costs.  With the current "SLI" model, they build just a single chip and board design, then slap a couple or few together and try to get the driver to sort out the parallelization--never gonna work well.</p><p>Also, such a multi card solution would be balked at by any user who just wants a single board solution.  And, for an SLI configuration, you'd need a minimum of 3 cards.  On the plus side, each card would draw far less juice and generate less heat since we've divvied up the transistor load over two different chips, so each board should be doable in a single slot solution, not a double wide configuration like many monsters we see today.  Thus, 3 boards would once again fit in only 3 slots.</p><p>This architecture is the best solution if you actually want the scalability the GPU makers are charging you for.  As I said, it will likely never happen again, because the GPU makers are more concerned about their bottom line (profits) than they are about keeping their performance promises to their most dearly paying customers.</p><p>3Dfx was on the right track.  They'd perfected SLI, and it scaled 100%.  3Dfx truly focused on the high end of performance, and delivered it to their customers.  However, the number of such customers was small, and the hardware cost was high.  Competitors came along with cheaper Jack-of-all-trades cards they could sell up and down the price-performance range with a single GPU chip, sorting out defective ones to be sold as the bottom of the range.  These were ATI and nVidia, primarily nVidia.  3Dfx went out of business.  Not because they had an inferior product, far from it--it was superior.  But, the costs were too high, and they could no longer compete across the entire range.</p><p>In summary, if you want scaling, you're going to have to wait for a bold new company to jump into the market with a multi chip multi board product and service ONLY the very high ($$) end of it, and you will have to pay dearly for the products, even more so than the robbery prices of the nVidia cards of today.</p>
	</htmltext>
<tokentext>Dual GPU solutions are so pointless , a waste of money for little performance gain , that does n't even work in some games .
This was n't always the case , although back then the marketing term GPU had n't been invented yet .
I have a pair of 12MB 3Dfx branded Voodoo2 cards ( post STB acquisition ) , and they doubled the performance of every game I played .
There was no overhead loss of any kind .
The performance scaling was performed almost entirely in hardware , with little driver support needed .
Remember all the benchmarks at AnandTech et al that showed perfect 2x scaling for the Voodoo2 SLI setup ?
The main reason for this is that geometry processing was still mostly handled by the host CPU back then .
Once geometry was moved into the graphics chip ( GPU ) , the original scan line interleave became impossible .
Or , I should say , to do so would require idling one GPU 's geometry unit altogether , and only utilizing its raster units .
The main advantage of a GPU * IS * the built-in geometry unit--everything on one chip , shorter communications paths , greater bandwidth between units and memory , etc .
Does anyone remember the Obsidian Alchemy cards ?
They put Voodoo2 SLI on a single board .
In addition to this 'consumer ' card , they had a military/government product which paired 4 of these boards into a Pentium II system for a total of 8 Voodoo2s .
All of them were daisy chained together , and had external genlock/framelock F connectors to drive synchronized scene output on a set of 4 displays for flight/battle tank/ship/helicopter simulations .
They also had the ability to drive a single display at 2048x1536 , IIRC , with full 8x FSAA , albeit at 16bpp .
My point , after all that text above , is that graphics is , and always has been , extremely parallel in nature .
If the consumer market truly demands scalable graphics , from the standpoint of slapping more and more cards in a system and getting linear scalability , then the GPU 'industry ' is going to have to go back to the scan line interleave model and put geometry processing back on the host CPU .
This will require a driver and pipeline rewrite across the industry , as all the APIs ( DirectX and OpenGL ) have been built up since around 1998 for integrated GPUs .
The reason we 're where we are is chip production cost .
It 's cost , cost , cost .
Manufacturers need to minimize the number of unique chips they produce due to cost .
That 's why what I write below will likely never happen ...
The other 'option ' is for the GPU guys to make 2 separate chips , such as Intergraph and 3Dlabs did back in the day , and 3Dfx did with their final product , the Voodoo5 series .
They had a dedicated geometry chip , and dedicated raster chips , decoupling them , vs the nVidia design , so as to allow multiple raster chips for greater performance .
( Back then the APIs and the games were still primarily pixel bound , not geometry bound ) .
This was done , albeit , on one board .
However , it would be possible , and possibly very advantageous , to create one board with a geometry chip , and a board with the raster chips , with custom cabled interconnects between the cards , that could handle extremely high bandwidth .
In essence , we 're taking the old SLI model , but offloading the geometry from the host CPU to a dedicated chip , just like we do today , but we 're decoupling the raster function from the geometry at the physical hardware level .
Today it 's all on the single chip .
Geometry processing is n't inherently parallel in nature .
It is mostly serial , defining the 3D scene , one polygon after another , and they must be in order , lest the scene be corrupted .
This is why current " SLI " schemes fail to deliver anywhere close to linear speedup .
They expend more host CPU resources crunching driver code trying to divvy up the geometry load between two GPUs than they do actually dividing the total graphics processing load across two GPUs--both geometry and raster .
A product based on the above architecture would kick the dog shit out of any " SLI " solution on the market today .
The problem is no one is going to build it because it has multiple different parts , substantially driving up costs .
With the current " SLI " model , they build just a single chip and board design , then slap a couple or few together and try to get the driver to sort out the parallelization--never gon na work well .
Also , such a multi card solution would be balked at by any user who just wants a single board solution .
And , for an SLI configuration , you 'd need a minimum of 3 cards .
On the plus side , each card would draw far less juice and generate less heat since we 've divvied up the transistor load over two different chips , so each board should be doable in a single slot solution , not a double wide configuration like many monsters we see today .
Thus , 3 boards would once again fit in only 3 slots .
This architecture is the best solution if you actually want the scalability the GPU makers are charging you for .
As I said , it will likely never happen again , because the GPU makers are more concerned about their bottom line ( profits ) than they are about keeping their performance promises to their most dearly paying customers .
3Dfx was on the right track .
They 'd perfected SLI , and it scaled 100 % .
3Dfx truly focused on the high end of performance , and delivered it to their customers .
However , the number of such customers was small , and the hardware cost was high .
Competitors came along with cheaper Jack-of-all-trades cards they could sell up and down the price-performance range with a single GPU chip , sorting out defective ones to be sold as the bottom of the range .
These were ATI and nVidia , primarily nVidia .
3Dfx went out of business .
Not because they had an inferior product , far from it--it was superior .
But , the costs were too high , and they could no longer compete across the entire range .
In summary , if you want scaling , you 're going to have to wait for a bold new company to jump into the market with a multi chip multi board product and service ONLY the very high ( $ $ ) end of it , and you will have to pay dearly for the products , even more so than the robbery prices of the nVidia cards of today .</tokentext>
<sentencetext>Dual GPU solutions are so pointless, a waste of money for little performance gain, that doesn't even work in some games.
This wasn't always the case, although back then the marketing term GPU hadn't been invented yet.
I have a pair of 12MB 3Dfx branded Voodoo2 cards (post STB acquisition), and they doubled the performance of every game I played.
There was no overhead loss of any kind.
The performance scaling was performed almost entirely in hardware, with little driver support needed.
Remember all the benchmarks at AnandTech et al that showed perfect 2x scaling for the Voodoo2 SLI setup?
The main reason for this is that geometry processing was still mostly handled by the host CPU back then.
Once geometry was moved into the graphics chip (GPU), the original scan line interleave became impossible.
Or, I should say, to do so would require idling one GPU's geometry unit altogether, and only utilizing its raster units.
The main advantage of a GPU *IS* the built-in geometry unit--everything on one chip, shorter communications paths, greater bandwidth between units and memory, etc.
Does anyone remember the Obsidian Alchemy cards?
They put Voodoo2 SLI on a single board.
In addition to this 'consumer' card, they had a military/government product which paired 4 of these boards into a Pentium II system for a total of 8 Voodoo2s.
All of them were daisy chained together, and had external genlock/framelock F connectors to drive synchronized scene output on a set of 4 displays for flight/battle tank/ship/helicopter simulations.
They also had the ability to drive a single display at 2048x1536, IIRC, with full 8x FSAA, albeit at 16bpp.
My point, after all that text above, is that graphics is, and always has been, extremely parallel in nature.
If the consumer market truly demands scalable graphics, from the standpoint of slapping more and more cards in a system and getting linear scalability, then the GPU 'industry' is going to have to go back to the scan line interleave model and put geometry processing back on the host CPU.
This will require a driver and pipeline rewrite across the industry, as all the APIs (DirectX and OpenGL) have been built up since around 1998 for integrated GPUs.
The reason we're where we are is chip production cost.
It's cost, cost, cost.
Manufacturers need to minimize the number of unique chips they produce due to cost.
That's why what I write below will likely never happen...
The other 'option' is for the GPU guys to make 2 separate chips, such as Intergraph and 3Dlabs did back in the day, and 3Dfx did with their final product, the Voodoo5 series.
They had a dedicated geometry chip, and dedicated raster chips, decoupling them, vs the nVidia design, so as to allow multiple raster chips for greater performance.
(Back then the APIs and the games were still primarily pixel bound, not geometry bound).
This was done, albeit, on one board.
However, it would be possible, and possibly very advantageous, to create one board with a geometry chip, and a board with the raster chips, with custom cabled interconnects between the cards, that could handle extremely high bandwidth.
In essence, we're taking the old SLI model, but offloading the geometry from the host CPU to a dedicated chip, just like we do today, but we're decoupling the raster function from the geometry at the physical hardware level.
Today it's all on the single chip.
Geometry processing isn't inherently parallel in nature.
It is mostly serial, defining the 3D scene, one polygon after another, and they must be in order, lest the scene be corrupted.
This is why current "SLI" schemes fail to deliver anywhere close to linear speedup.
They expend more host CPU resources crunching driver code trying to divvy up the geometry load between two GPUs than they do actually dividing the total graphics processing load across two GPUs--both geometry and raster.
A product based on the above architecture would kick the dog shit out of any "SLI" solution on the market today.
The problem is no one is going to build it because it has multiple different parts, substantially driving up costs.
With the current "SLI" model, they build just a single chip and board design, then slap a couple or few together and try to get the driver to sort out the parallelization--never gonna work well.Also, such a multi card solution would be balked at by any user who just wants a single board solution.
And, for an SLI configuration, you'd need a minimum of 3 cards.
On the plus side, each card would draw far less juice and generate less heat since we've divvied up the transistor load over two different chips, so each board should be doable in a single slot solution, not a double wide configuration like many monsters we see today.
Thus, 3 boards would once again fit in only 3 slots.
This architecture is the best solution if you actually want the scalability the GPU makers are charging you for.
As I said, it will likely never happen again, because the GPU makers are more concerned about their bottom line (profits) than they are about keeping their performance promises to their most dearly paying customers.
3Dfx was on the right track.
They'd perfected SLI, and it scaled 100\%.
3Dfx truly focused on the high end of performance, and delivered it to their customers.
However, the number of such customers was small, and the hardware cost was high.
Competitors came along with cheaper Jack-of-all-trades cards they could sell up and down the price-performance range with a single GPU chip, sorting out defective ones to be sold as the bottom of the range.
These were ATI and nVidia, primarily nVidia.
3Dfx went out of business.
Not because they had an inferior product, far from it--it was superior.
But, the costs were too high, and they could no longer compete across the entire range.
In summary, if you want scaling, you're going to have to wait for a bold new company to jump into the market with a multi chip multi board product and service ONLY the very high ($$) end of it, and you will have to pay dearly for the products, even more so than the robbery prices of the nVidia cards of today.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398539</parent>
</comment>
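To make the comment's scan-line interleave argument concrete, here is a minimal runnable sketch in Python; the scene format, function names, and the use of threads as stand-ins for raster boards are all invented for illustration (real Voodoo2 SLI did the split in silicon). Geometry runs serially in submission order, exactly the bottleneck the comment describes, and only the per-scanline raster work is divided between the "cards".

```python
# Toy model of Voodoo2-style scan-line interleave (SLI).
# Geometry is serial; shading individual scanlines is independent,
# which is why the original SLI could scale almost perfectly.
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 640, 480

def host_geometry(triangles):
    # Serial stage: triangles must be handled in a defined order
    # (back-to-front here), so this stage does not parallelize.
    return sorted(triangles, key=lambda t: t["depth"], reverse=True)

def rasterize_scanline(y, triangles):
    # A scanline depends only on the already-processed geometry,
    # so any number of raster units can shade scanlines independently.
    row = [0] * WIDTH
    for t in triangles:
        if t["y0"] <= y <= t["y1"]:
            row[t["x0"]:t["x1"] + 1] = [t["color"]] * (t["x1"] - t["x0"] + 1)
    return row

def render(triangles, num_cards=2):
    ordered = host_geometry(triangles)        # the serial bottleneck
    frame = [None] * HEIGHT

    def card(k):
        # Card k owns scanlines k, k + num_cards, k + 2*num_cards, ...
        for y in range(k, HEIGHT, num_cards):
            frame[y] = rasterize_scanline(y, ordered)

    with ThreadPoolExecutor(max_workers=num_cards) as pool:
        list(pool.map(card, range(num_cards)))  # wait for all "cards"
    return frame

if __name__ == "__main__":
    scene = [{"depth": 1.0, "x0": 50, "x1": 400,
              "y0": 100, "y1": 300, "color": 0xFF0000}]
    frame = render(scene, num_cards=2)
    print(sum(row.count(0xFF0000) for row in frame), "shaded pixels")
```

With the geometry stage this cheap, doubling num_cards roughly halves raster time, the near-2x scaling the comment recalls from the Voodoo2 benchmarks; let the serial geometry stage grow, as it did once geometry moved onto the GPU, and Amdahl's law eats the advantage.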
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28401321</id>
	<title>Re:Single card, Dual GPU</title>
	<author>Anonymous</author>
	<datestamp>1245508560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext>You need power and bandwidth to make one of those babies work. You can't supply enough power and bandwidth to run multiple cards from a single PCI slot.</htmltext>
<tokenext>You need power and bandwidth to make one of those babies work .
You ca n't supply enough power and bandwidth to run multiple cards from a single PCI slot .</tokentext>
<sentencetext>You need power and bandwidth to make one of those babies work.
You can't supply enough power and bandwidth to run multiple cards from a single PCI slot.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399291</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28419081</id>
	<title>Re:isn't sli just bs tech designed to sell more ca</title>
	<author>Nyder</author>
	<datestamp>1245669240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>I have 2x285 in SLI.</p><p>Sure, I could have just bought 1, but I didn't.   I bought a 1920x1200 LCD monitor.</p><p>I like playing with everything maxed out (including AA) at 60 fps.</p><p>Only a couple of games dip below 60 fps, like Crysis and NFS Undercover.</p><p>Could I have gone with a smaller LCD and one card and still had a great gaming experience?</p><p>Yep, I could have.  But I didn't.</p><p>So you that don't do SLI, or get SLI, or don't game, or are just stupid, understand this:<br>Don't hate because you don't have SLI, can't afford SLI, don't game, or just don't understand what I'm saying.</p><p>Oh, just so ya know, with 1 card I get about 23 fps in Crysis, with it dropping into the teens a bit.  I get about 43ish average with 2 cards.  Not quite double the performance, but close.</p><p>Seriously, I've seen more whining and misinformation about SLI from peeps that don't have it.</p><p>Lots of smart gamers are buying the cheaper cards for SLI and getting very acceptable performance for gaming.<br>Not to mention that through CUDA you have access to some nice processing power for various apps.  A lot of people buy cards to SLI for folding farms.</p><p>SLI is not BS tech; it's decent technology that does what it claims.</p><p>Of course, my experience has been great with my setup, but it is like a heater.   Dual cards hitting 150+F each, CPU gets up to 125F.   I do have a better case I have to move it to, since summer is starting.</p></htmltext>
<tokenext>I have 2x285 in SLI .
Sure , I could have just bought 1 , but I did n't .
I bought a 1920x1200 LCD monitor .
I like playing with everything maxed out ( including AA ) at 60 fps .
Only a couple of games dip below 60 fps , like Crysis and NFS Undercover .
Could I have gone with a smaller LCD and one card and still had a great gaming experience ?
Yep , I could have .
But I did n't .
So you that do n't do SLI , or get SLI , or do n't game , or are just stupid , understand this : Do n't hate because you do n't have SLI , ca n't afford SLI , do n't game , or just do n't understand what I 'm saying .
Oh , just so ya know , with 1 card I get about 23 fps in Crysis , with it dropping into the teens a bit .
I get about 43ish average with 2 cards .
Not quite double the performance , but close .
Seriously , I 've seen more whining and misinformation about SLI from peeps that do n't have it .
Lots of smart gamers are buying the cheaper cards for SLI and getting very acceptable performance for gaming .
Not to mention that through CUDA you have access to some nice processing power for various apps .
A lot of people buy cards to SLI for folding farms .
SLI is not BS tech ; it 's decent technology that does what it claims .
Of course , my experience has been great with my setup , but it is like a heater .
Dual cards hitting 150+F each , CPU gets up to 125F .
I do have a better case I have to move it to , since summer is starting .</tokentext>
<sentencetext>I have 2x285 in SLI.
Sure, I could have just bought 1, but I didn't.
I bought a 1920x1200 LCD monitor.
I like playing with everything maxed out (including AA) at 60 fps.
Only a couple of games dip below 60 fps, like Crysis and NFS Undercover.
Could I have gone with a smaller LCD and one card and still had a great gaming experience?
Yep, I could have.
But I didn't.
So you that don't do SLI, or get SLI, or don't game, or are just stupid, understand this: Don't hate because you don't have SLI, can't afford SLI, don't game, or just don't understand what I'm saying.
Oh, just so ya know, with 1 card I get about 23 fps in Crysis, with it dropping into the teens a bit.
I get about 43ish average with 2 cards.
Not quite double the performance, but close.
Seriously, I've seen more whining and misinformation about SLI from peeps that don't have it.
Lots of smart gamers are buying the cheaper cards for SLI and getting very acceptable performance for gaming.
Not to mention that through CUDA you have access to some nice processing power for various apps.
A lot of people buy cards to SLI for folding farms.
SLI is not BS tech; it's decent technology that does what it claims.
Of course, my experience has been great with my setup, but it is like a heater.
Dual cards hitting 150+F each, CPU gets up to 125F.
I do have a better case I have to move it to, since summer is starting.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399173</parent>
</comment>
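As a quick sanity check on the Crysis figures in the comment above (about 23 fps on one card, about 43 fps on two), the claimed scaling pencils out to roughly 93% of the ideal; a back-of-the-envelope sketch:

```python
# Scaling check on the numbers quoted above: ~23 fps on one card,
# ~43 fps average on two cards in SLI (figures from the comment).
single_fps, dual_fps, cards = 23.0, 43.0, 2

speedup = dual_fps / single_fps     # ~1.87x over a single card
efficiency = speedup / cards        # ~0.93, i.e. ~93% of ideal linear scaling

print(f"speedup: {speedup:.2f}x, per-card efficiency: {efficiency:.0%}")
```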
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28401035</id>
	<title>Could it really be AMD abandoning NVIDIA?</title>
	<author>Anonymous</author>
	<datestamp>1245505080000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>As I think about it, one thing occurs to me.</p><p>ATI wasn't just a graphics company.  They make chipsets too.  "Well duh!" you may say to me but I think it's not coincidental.  I believe AMD wanted to bring one of the two chipset manufacturers in house so they could have better coupling between their processors and their chipsets.</p><p>With a chipset business in-house, AMD now has greater control over coordinating the release of processors and compatible chipsets.  I really think AMD believes they have no use for NVIDIA chipsets at all.</p><p>What does this mean for NVIDIA?  I have to believe that making chipsets for AMD processors is becoming more trouble for them than it's worth.  They're competing with AMD who is leveraging their combined process to come out with tightly integrated products.  That's a tough business model to fight against.  And abandoning SLI is just the first step in walking away from making AMD chipsets.  Consider also that NVIDIA is at least trying to make their own CPU and I have to wonder if they're not siphoning resources off their chipset unit with the eventual goal of closing down all chipset work for AMD processors.</p></htmltext>
<tokenext>As I think about it , one thing occurs to me .
ATI was n't just a graphics company .
They make chipsets too .
" Well duh !
" you may say to me but I think it 's not coincidental .
I believe AMD wanted to bring one of the two chipset manufacturers in house so they could have better coupling between their processors and their chipsets .
With a chipset business in-house , AMD now has greater control over coordinating the release of processors and compatible chipsets .
I really think AMD believes they have no use for NVIDIA chipsets at all .
What does this mean for NVIDIA ?
I have to believe that making chipsets for AMD processors is becoming more trouble for them than it 's worth .
They 're competing with AMD who is leveraging their combined process to come out with tightly integrated products .
That 's a tough business model to fight against .
And abandoning SLI is just the first step in walking away from making AMD chipsets .
Consider also that NVIDIA is at least trying to make their own CPU and I have to wonder if they 're not siphoning resources off their chipset unit with the eventual goal of closing down all chipset work for AMD processors .</tokentext>
<sentencetext>As I think about it, one thing occurs to me.
ATI wasn't just a graphics company.
They make chipsets too.
"Well duh!
" you may say to me but I think it's not coincidental.
I believe AMD wanted to bring one of the two chipset manufacturers in house so they could have better coupling between their processors and their chipsets.
With a chipset business in-house, AMD now has greater control over coordinating the release of processors and compatible chipsets.
I really think AMD believes they have no use for NVIDIA chipsets at all.
What does this mean for NVIDIA?
I have to believe that making chipsets for AMD processors is becoming more trouble for them than it's worth.
They're competing with AMD who is leveraging their combined process to come out with tightly integrated products.
That's a tough business model to fight against.
And abandoning SLI is just the first step in walking away from making AMD chipsets.
Consider also that NVIDIA is at least trying to make their own CPU and I have to wonder if they're not siphoning resources off their chipset unit with the eventual goal of closing down all chipset work for AMD processors.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398717</id>
	<title>Growing up</title>
	<author>ViennaLen</author>
	<datestamp>1245426660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>As we age, we realize that two completely different foods, peas and carrots, AMD and NVIDIA, can't be eaten together anymore as baby food mush.<br> <br>Most people who use SLI, namely gamers and workstation users, use Intel processors anyway. Intel has Core i7, with lower-end versions (i3, i5) coming out later this year and next year, and has the publicity power to appeal to the aforementioned parties. AMD just doesn't impress much anymore.</htmltext>
<tokenext>As we age , we realize that two completely different foods , peas and carrots , AMD and NVIDIA , ca n't be eaten together anymore as baby food mush .
Most people who use SLI , namely gamers and workstation users , use Intel processors anyway .
Intel has Core i7 , with lower-end versions ( i3 , i5 ) coming out later this year and next year , and has the publicity power to appeal to the aforementioned parties .
AMD just does n't impress much anymore .</tokentext>
<sentencetext>As we age, we realize that two completely different foods, peas and carrots, AMD and NVIDIA, can't be eaten together anymore as baby food mush.
Most people who use SLI, namely gamers and workstation users, use Intel processors anyway.
Intel has Core i7, with lower-end versions (i3, i5) coming out later this year and next year, and has the publicity power to appeal to the aforementioned parties.
AMD just doesn't impress much anymore.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28401853</id>
	<title>Re:Who cares?</title>
	<author>bbbl67</author>
	<datestamp>1245513060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext>It's looking like dual graphics will be the default scenario pretty soon anyway. Not in the sense of two graphics cards as in SLI or Crossfire. But more like a combination of graphics cards and integrated graphics. Both Intel and AMD are working towards their combined CPU/GPU chips (Fusion, etc.). So in the not too distant future I can see all machines having a default integrated GPU built-in. Then high-end gamers will invariably add in a high-end GPU. So why not make use of both resources? Instead of using either the integrated graphics or the add-in graphics, use both at the same time. It doesn't have to be for graphics alone, but it could also be used for OpenCL and DirectX 11's physics engine.</htmltext>
<tokenext>It 's looking like dual graphics will be the default scenario pretty soon anyway .
Not in the sense of two graphics cards as in SLI or Crossfire .
But more like a combination of graphics cards and integrated graphics .
Both Intel and AMD are working towards their combined CPU/GPU chips ( Fusion , etc. ) .
So in the not too distant future I can see all machines having a default integrated GPU built-in .
Then high-end gamers will invariably add in a high-end GPU .
So why not make use of both resources ?
Instead of using either the integrated graphics or the add-in graphics , use both at the same time .
It does n't have to be for graphics alone , but it could also be used for OpenCL and DirectX 11 's physics engine .</tokentext>
<sentencetext>It's looking like dual graphics will be the default scenario pretty soon anyway.
Not in the sense of two graphics cards as in SLI or Crossfire.
But more like a combination of graphics cards and integrated graphics.
Both Intel and AMD are working towards their combined CPU/GPU chips (Fusion, etc.).
So in the not too distant future I can see all machines having a default integrated GPU built-in.
Then high-end gamers will invariably add in a high-end GPU.
So why not make use of both resources?
Instead of using either the integrated graphics or the add-in graphics, use both at the same time.
It doesn't have to be for graphics alone, but it could also be used for OpenCL and DirectX 11's physics engine.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398881</parent>
</comment>
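A hedged sketch of how the "use both resources" idea above could look from application code, using the pyopencl bindings; it assumes the drivers expose both the integrated part and the add-in board as OpenCL GPU devices, which depends entirely on the platform:

```python
# Enumerate every GPU that OpenCL can see. On a Fusion-style system
# with an add-in card, both devices should appear, and an application
# could dedicate one to physics while the other renders.
import pyopencl as cl

gpus = []
for platform in cl.get_platforms():
    gpus.extend(platform.get_devices(device_type=cl.device_type.GPU))

for dev in gpus:
    print(dev.name, "-", dev.global_mem_size // (1024 * 1024), "MB global memory")

# One context per device; queue physics kernels on the integrated GPU
# while the discrete board handles the frame.
contexts = [cl.Context(devices=[dev]) for dev in gpus]
```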
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398491</id>
	<title>amd</title>
	<author>Anonymous</author>
	<datestamp>1245424260000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Beginning of the end?</p></htmltext>
<tokenext>Beginning of the end ?</tokentext>
<sentencetext>Beginning of the end?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398569</id>
	<title>Who CARES about SLI?</title>
	<author>PrescriptionWarning</author>
	<datestamp>1245424980000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
<htmltext>The fact is that a very small portion of people actually use more than one video card. And why should anyone, really? When modern day consoles cost about the same amount as one would spend on a moderately high end processor + video card, why the hell would most people want to spend an extra 300 bucks or so to have an extra video card at only 25\% or less extra benefit in framerate?  Only the hardcore ones with the extra wallet is who.  As for me, I'm more than happy with my $1000 system with ONE video card, and I know it's going to last me at least an extra year or two anyway.

<br> <br>

Anyway, all I'm saying is AMD has the ability to tie in their own processor + GPU combo, plus let the consumer buy a separate GPU, thus getting their own "SLI". If they play their cards right, they can just give the finger to NVIDIA and provide some real competition that this market really needs to prevent us all from paying $200-300 for a decent GPU these days.</htmltext>
<tokenext>The fact is that a very small portion of people actually use more than one video card .
And why should anyone , really ?
When modern day consoles cost about the same amount as one would spend on a moderately high end processor + video card , why the hell would most people want to spend an extra 300 bucks or so to have an extra video card at only 25 \ % or less extra benefit in framerate ?
Only the hardcore ones with the extra wallet is who .
As for me , I 'm more than happy with my $ 1000 system with ONE video card , and I know it 's going to last me at least an extra year or two anyway .
Anyway , all I 'm saying is AMD has the ability to tie in their own processor + GPU combo , plus let the consumer buy a separate GPU , thus getting their own " SLI " .
If they play their cards right , they can just give the finger to NVIDIA and provide some real competition that this market really needs to prevent us all from paying $ 200-300 for a decent GPU these days .</tokentext>
<sentencetext>The fact is that a very small portion of people actually use more than one video card.
And why should anyone, really?
When modern day consoles cost about the same amount as one would spend on a moderately high end processor + video card, why the hell would most people want to spend an extra 300 bucks or so to have an extra video card at only 25\% or less extra benefit in framerate?
Only the hardcore ones with the extra wallet is who.
As for me, I'm more than happy with my $1000 system with ONE video card, and I know it's going to last me at least an extra year or two anyway.
Anyway, all I'm saying is AMD has the ability to tie in their own processor + GPU combo, plus let the consumer buy a separate GPU, thus getting their own "SLI".
If they play their cards right, they can just give the finger to NVIDIA and provide some real competition that this market really needs to prevent us all from paying $200-300 for a decent GPU these days.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28407723</id>
	<title>Re:Well...</title>
	<author>electrosoccertux</author>
	<datestamp>1245519660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Not really, I'd encourage you to check them out again; since AMD bought them their drivers have been steadily improving. Used to be performance with games on ATI was hit or miss, but since the 3xxx generation it's been good across the board. This 4xxx generation of ATI/AMD cards has been beating Nvidia's offerings at each price point, no questions asked.</p><p>Next card purchase of mine will likely be ATI. 4850 for $85 after rebate ($15)? Heck yeah.</p></htmltext>
<tokenext>Not really , I 'd encourage you to check them out again ; since AMD bought them their drivers have been steadily improving .
Used to be performance with games on ATI was hit or miss , but since the 3xxx generation it 's been good across the board .
This 4xxx generation of ATI/AMD cards has been beating Nvidia 's offerings at each price point , no questions asked .
Next card purchase of mine will likely be ATI .
4850 for $ 85 after rebate ( $ 15 ) ?
Heck yeah .</tokentext>
<sentencetext>Not really, I'd encourage you to check them out again; since AMD bought them their drivers have been steadily improving.
Used to be performance with games on ATI was hit or miss, but since the 3xxx generation it's been good across the board.
This 4xxx generation of ATI/AMD cards has been beating Nvidia's offerings at each price point, no questions asked.
Next card purchase of mine will likely be ATI.
4850 for $85 after rebate ($15)?
Heck yeah.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398669</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398539</id>
	<title>Who cares?</title>
	<author>Winckle</author>
	<datestamp>1245424740000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>Dual GPU solutions are pointless: a waste of money for little performance gain that doesn't even work in some games.</p></htmltext>
<tokenext>Dual GPU solutions are pointless : a waste of money for little performance gain that does n't even work in some games .</tokentext>
<sentencetext>Dual GPU solutions are pointless: a waste of money for little performance gain that doesn't even work in some games.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398739</id>
	<title>Oh silly hardware companies..</title>
	<author>synthesizerpatel</author>
	<datestamp>1245426840000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>1</modscore>
	<htmltext><p>ATI &amp; AMD is just power-housing the craptitude under one roof.</p><p>ATI has and always will be a second rate hardware company, and a fall-flat-on-their-face failure at drivers. Crab all you want about NVIDIA but they got the goods and the business strategy that put them on top.</p><p>AMD's most important product to date has simply been the act of competing with Intel. Lying about how great their products were forced Intel to make products better than AMDs marketing BS.</p><p>When your competition says 'We're #1 at XYZ' you don't get more customers by putting out full page ads saying 'Our competitors are liars!'.. You either lie harder or make your product better than their hype.</p></htmltext>
<tokenext>ATI &amp; AMD is just power-housing the craptitude under one roof.ATI has and always will be a second rate hardware company , and a fall-flat-on-their-face failure at drivers .
Crab all you want about NVIDIA but they got the goods and the business strategy that put them on top.AMD 's most important product to date has simply been the act of competing with Intel .
Lying about how great their products were forced Intel to make products better than AMDs marketing BS.When your competition says 'We 're # 1 at XYZ ' you do n't get more customers by putting out full page ads saying 'Our competitors are liars ! '. .
You either lie harder or make your product better than their hype .</tokentext>
<sentencetext>ATI &amp; AMD is just power-housing the craptitude under one roof.ATI has and always will be a second rate hardware company, and a fall-flat-on-their-face failure at drivers.
Crab all you want about NVIDIA but they got the goods and the business strategy that put them on top.AMD's most important product to date has simply been the act of competing with Intel.
Lying about how great their products were forced Intel to make products better than AMDs marketing BS.When your competition says 'We're #1 at XYZ' you don't get more customers by putting out full page ads saying 'Our competitors are liars!'..
You either lie harder or make your product better than their hype.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398689</id>
	<title>Re:I don't know but...</title>
	<author>Anonymous</author>
	<datestamp>1245426360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>So its not like its completely up to Nvidia to start improving their standing with AMD because of pressure from Intel.</p></div><p>which, if it can be proven, would result in yet another anti-trust claim against intel.</p></div>
	</htmltext>
<tokenext>So it 's not like it 's completely up to Nvidia to start improving their standing with AMD because of pressure from Intel .
Which , if it can be proven , would result in yet another anti-trust claim against Intel .</tokentext>
<sentencetext>So it's not like it's completely up to Nvidia to start improving their standing with AMD because of pressure from Intel.
Which, if it can be proven, would result in yet another anti-trust claim against Intel.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398489</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28406917</id>
	<title>Re:I don't know but...</title>
	<author>Gel214th</author>
	<datestamp>1245511800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Isn't this exactly what Intel did, and part of the reason they were charged with anti-trust and anti-monopoly violations?</p></htmltext>
<tokenext>Is n't this exactly what Intel did , and part of the reason they were charged with anti-trust and anti-monopoly violations ?</tokentext>
<sentencetext>Isn't this exactly what Intel did, and part of the reason they were charged with anti-trust and anti-monopoly violations?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398489</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398669</id>
	<title>Re:Well...</title>
	<author>Anonymous</author>
	<datestamp>1245426180000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>The ATI video cards have impressive hardware specs when compared to Nvidia's.  However, their drivers and their driver support are shit.</p></htmltext>
<tokenext>The ATI video cards have impressive hardware specs when compared to Nvidia 's .
However , their drivers and their driver support are shit .</tokentext>
<sentencetext>The ATI video cards have impressive hardware specs when compared to Nvidia's.
However, their drivers and their driver support are shit.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398497</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28403861</id>
	<title>NVidia and AMD</title>
	<author>PhotoGuy</author>
	<datestamp>1245531000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>A bit OT, but I'm curious as to why the best deal for half decent motherboards around here seems to be NVidia chipsets and onboard graphics, and AMD processors...  Should AMD/ATI be cranking out chipsets that allow board makers to do better/faster/cheaper boards with combos from the same manufacturer?  Just seems odd.  All the PCs in my house (one Linux, one Hackintosh, a couple of Windows ones for the kids) are NVidia/AMD setups, bought over the past few years.</p></htmltext>
<tokenext>A bit OT , but I 'm curious as to why the best deal for half decent motherboards around here seems to be NVidia chipsets and onboard graphics , and AMD processors ...
Should AMD/ATI be cranking out chipsets that allow board makers to do better/faster/cheaper boards with combos from the same manufacturer ?
Just seems odd .
All the PCs in my house ( one Linux , one Hackintosh , a couple of Windows ones for the kids ) are NVidia/AMD setups , bought over the past few years .</tokentext>
<sentencetext>A bit OT, but I'm curious as to why the best deal for half decent motherboards around here seems to be NVidia chipsets and onboard graphics, and AMD processors...
Should AMD/ATI be cranking out chipsets that allow board makers to do better/faster/cheaper boards with combos from the same manufacturer?
Just seems odd.
All the PCs in my house (one Linux, one Hackintosh, a couple of Windows ones for the kids) are NVidia/AMD setups, bought over the past few years.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28403099</id>
	<title>Re:Who CARES about SLI?</title>
	<author>PitaBred</author>
	<datestamp>1245524220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You do know that ATI already has Crossfire, and even CrossfireX, which will let you mix cards of different speeds and still get a benefit? They don't have to be matched cards?</htmltext>
<tokenext>You do know that ATI already has Crossfire , and even CrossfireX , which will let you mix cards of different speeds and still get a benefit ?
They do n't have to be matched cards ?</tokentext>
<sentencetext>You do know that ATI already has Crossfire, and even CrossfireX, which will let you mix cards of different speeds and still get a benefit?
They don't have to be matched cards?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398569</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399151</id>
	<title>Unimportant.</title>
	<author>Jartan</author>
	<datestamp>1245431700000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>Really the article makes it sound like Nvidia is abandoning AMD chipsets but it's just SLI support.   When they started making this decision it looked like AMD was totally dead in the enthusiast market.  Even die-hards were switching to Intel chips.   It seemed for a while there that the market for dual graphics cards on AMD was nearly dead.   Now that AMD has a good chip again Nvidia will probably be scrambling to get a new chipset out for enthusiasts.</p></htmltext>
<tokenext>Really the article makes it sound like Nvidia is abandoning AMD chipsets but it 's just SLI support .
When they started making this decision it looked like AMD was totally dead in the enthusiast market .
Even die-hards were switching to Intel chips .
It seemed for a while there that the market for dual graphics cards on AMD was nearly dead .
Now that AMD has a good chip again Nvidia will probably be scrambling to get a new chipset out for enthusiasts .</tokentext>
<sentencetext>Really the article makes it sound like Nvidia is abandoning AMD chipsets but it's just SLI support.
When they started making this decision it looked like AMD was totally dead in the enthusiast market.
Even die-hards were switching to Intel chips.
It seemed for a while there that the market for dual graphics cards on AMD was nearly dead.
Now that AMD has a good chip again Nvidia will probably be scrambling to get a new chipset out for enthusiasts.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399745</id>
	<title>damn it not again!</title>
	<author>Anonymous</author>
	<datestamp>1245440640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Acer &gt; Asus... I always get those two confused<nobr> <wbr></nobr>:/</p></htmltext>
<tokenext>Acer &gt; Asus... I always get those two confused : /</tokentext>
<sentencetext>Acer &gt; Asus... I always get those two confused :/</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28400067</id>
	<title>Re:Who cares?</title>
	<author>Anonymous</author>
	<datestamp>1245488940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Agreed. SLI is useless. I've had SLI-capable boards for many years, always thinking "hey I can upgrade by adding a second card when it gets cheap". By the time I actually wanted to upgrade, it was always cheaper to buy a newly released card that offered more performance than 2 of the old ones.</p><p>Two up-to-date cards are pointless as it's expensive, consumes massive amounts of power, and current games are developed for consoles with an even worse single graphics card.</p></htmltext>
<tokenext>Agreed .
SLI is useless .
I 've had SLI-capable boards for many years , always thinking " hey I can upgrade by adding a second card when it gets cheap " .
By the time I actually wanted to upgrade , it was always cheaper to buy a newly released card that offered more performance than 2 of the old ones .
Two up-to-date cards are pointless as it 's expensive , consumes massive amounts of power , and current games are developed for consoles with an even worse single graphics card .</tokentext>
<sentencetext>Agreed.
SLI is useless.
I've had SLI-capable boards for many years, always thinking "hey I can upgrade by adding a second card when it gets cheap".
By the time I actually wanted to upgrade, it was always cheaper to buy a newly released card that offered more performance than 2 of the old ones.
Two up-to-date cards are pointless as it's expensive, consumes massive amounts of power, and current games are developed for consoles with an even worse single graphics card.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398539</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28400639</id>
	<title>Re:Who CARES about SLI?</title>
	<author>Gregg M</author>
	<datestamp>1245497760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><i>why the hell would most people want to spend an extra 300 bucks or so to have an extra video card at only 25\% or less extra benefit in framerate?</i> <p>
SLI will give you better than 25\% improved performance. I hear the cards can double performance. Still, most games will stick to one video card. The Valve hardware survey has SLI users at 2\% of Valve customers. That's 2\% of <b>gamers</b>, not everyday PC users. </p><p>

You don't need to spend 300 bucks on any video card. Most 100 dollar video cards give you all the performance most people need. People are even playing video games on the Intel 965 embedded chipsets. Usually you see people using SLI if they've already pushed the upper limit on video cards. These are the people spending over 300 on each card who can't increase performance with a better card. They've hit the ceiling, and SLI does deliver the performance.</p></htmltext>
<tokenext>why the hell would most people want to spend an extra 300 bucks or so to have an extra video card at only 25 \ % or less extra benefit in framerate ?
SLI will give you better than 25 \ % improved performance .
I hear the cards can double performance .
Still , most games will stick to one video card .
The Valve hardware survey has SLI users at 2 \ % of Valve customers .
That 's 2 \ % of gamers , not everyday PC users .
You do n't need to spend 300 bucks on any video card .
Most 100 dollar video cards give you all the performance most people need .
People are even playing video games on the Intel 965 embedded chipsets .
Usually you see people using SLI if they 've already pushed the upper limit on video cards .
These are the people spending over 300 on each card who ca n't increase performance with a better card .
They 've hit the ceiling , and SLI does deliver the performance .</tokentext>
<sentencetext>why the hell would most people want to spend an extra 300 bucks or so to have an extra video card at only 25\% or less extra benefit in framerate?
SLI will give you better than 25\% improved performance.
I hear the cards can double performance.
Still, most games will stick to one video card.
The Valve hardware survey has SLI users at 2\% of Valve customers.
That's 2\% of gamers, not everyday PC users.
You don't need to spend 300 bucks on any video card.
Most 100 dollar video cards give you all the performance most people need.
People are even playing video games on the Intel 965 embedded chipsets.
Usually you see people using SLI if they've already pushed the upper limit on video cards.
These are the people spending over 300 on each card who can't increase performance with a better card.
They've hit the ceiling, and SLI does deliver the performance.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398569</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399393</id>
	<title>Not a problem</title>
	<author>Deliveranc3</author>
	<datestamp>1245435180000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>AMD = Value.<br> <br>SLI = Not Value.<br> <br>AMD has consistently shown that they want to put a computer at every set of hands on the planet. Geode, <a href="http://www.betanews.com/article/RadioShack-to-Sell-299-AMD-PC/1128100616" title="betanews.com">PIC</a> [betanews.com], OLPC. Now it would be nice if those computers had fast 3D graphics or GPU parallel processing, but that really seems like an easy way to waste the real power of computers.<br> <br> I have loved many Nvidia products in the past, but stepping away from AMD seems like a poor choice on Nvidia's part.</htmltext>
<tokenext>AMD = Value .
SLI = Not Value .
AMD has consistently shown that they want to put a computer at every set of hands on the planet .
Geode , PIC [ betanews.com ] , OLPC .
Now it would be nice if those computers had fast 3D graphics or GPU parallel processing , but that really seems like an easy way to waste the real power of computers .
I have loved many Nvidia products in the past , but stepping away from AMD seems like a poor choice on Nvidia 's part .</tokentext>
<sentencetext>AMD = Value.
SLI = Not Value.
AMD has consistently shown that they want to put a computer at every set of hands on the planet.
Geode, PIC [betanews.com], OLPC.
Now it would be nice if those computers had fast 3D graphics or GPU parallel processing, but that really seems like an easy way to waste the real power of computers.
I have loved many Nvidia products in the past, but stepping away from AMD seems like a poor choice on Nvidia's part.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398489</id>
	<title>I don't know but...</title>
	<author>carp3\_noct3m</author>
	<datestamp>1245424200000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext>This is pure conjecture, but to me it seemed as if when AMD and ATI became one team and Nvidia and Intel became the other, that it would make sense for each one to offer incentives (read: threats) so that their partner would not bend over for the competition. So it's not like it's completely up to Nvidia to start improving their standing with AMD because of pressure from Intel. If that made any sense, then I'll drink a couple more beers before posting next time. Out</htmltext>
<tokenext>This is pure conjecture , but to me it seemed as if when AMD and ATI became one team and Nvidia and Intel became the other , that it would make sense for each one to offer incentives ( read : threats ) so that their partner would not bend over for the competition .
So it 's not like it 's completely up to Nvidia to start improving their standing with AMD because of pressure from Intel .
If that made any sense , then I 'll drink a couple more beers before posting next time .
Out</tokentext>
<sentencetext>This is pure conjecture, but to me it seemed as if when AMD and ATI became one team and Nvidia and Intel became the other, that it would make sense for each one to offer incentives (read: threats) so that their partner would not bend over for the competition.
So it's not like it's completely up to Nvidia to start improving their standing with AMD because of pressure from Intel.
If that made any sense, then I'll drink a couple more beers before posting next time.
Out</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28402891</id>
	<title>Re:Nvidia</title>
	<author>Anonymous</author>
	<datestamp>1245522480000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>wasn't Asus an early adopter of Express Gate (instant-on Linux)?</p></htmltext>
<tokenext>was n't Asus an early adopter of Express Gate ( instant-on Linux ) ?</tokentext>
<sentencetext>wasn't Asus an early adopter of Express Gate (instant-on Linux)?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398707</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399173</id>
	<title>isn't sli just bs tech designed to sell more cards</title>
	<author>jollyreaper</author>
	<datestamp>1245432000000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>As I understand it, you don't really double your performance by putting two cards in. How many people seriously drop the coin to do this? Everything I've read says you'll get better bang for the buck by buying one good card, saving the money you would have spent on the second, and then buying an equivalent card in three years' time that will kick the arse of the first card.</p></htmltext>
<tokenext>As I understand it , you do n't really double your performance by putting two cards in .
How many people seriously drop the coin to do this ?
Everything I 've read says you 'll get better bang for the buck by buying one good card , saving the money you would have spent on the second , and then buying an equivalent card in three years ' time that will kick the arse of the first card .</tokentext>
<sentencetext>As I understand it, you don't really double your performance by putting two cards in.
How many people seriously drop the coin to do this?
Everything I've read says you'll get better bang for the buck by buying one good card, saving the money you would have spent on the second, and then buying an equivalent card in three years' time that will kick the arse of the first card.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398633</id>
	<title>watch out</title>
	<author>Anonymous</author>
	<datestamp>1245425820000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The fanboys are coming!</p></htmltext>
<tokenext>The fanboys are coming !</tokentext>
<sentencetext>The fanboys are coming!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28401171</id>
	<title>fixed it for you</title>
	<author>Anonymous</author>
	<datestamp>1245507240000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"...the Phenom II chips have reawakened interest in the platform and they have found a place in low end market's hearts again"</p><p>Seriously, welcome to 2006 AMD, you're just now matching Intel's 3y old architecture.  Unfortunately Core i7's been out for almost a year now, and is kicking --- and taking names.</p></htmltext>
<tokenext>" ...the Phenom II chips have reawakened interest in the platform and they have found a place in low end market 's hearts again " Seriously , welcome to 2006 AMD , you 're just now matching Intel 's 3y old architecture .
Unfortunately Core i7 's been out for almost a year now , and is kicking --- and taking names .</tokentext>
<sentencetext>"...the Phenom II chips have reawakened interest in the platform and they have found a place in low end market's hearts again"Seriously, welcome to 2006 AMD, you're just now matching Intel's 3y old architecture.
Unfortunately Core i7's been out for almost a year now, and is kicking --- and taking names.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28404833</id>
	<title>Meh.</title>
	<author>otis wildflower</author>
	<datestamp>1245496800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Not liking NVIDIA's stupidity lately with the mobile chipsets and hemming/hawing/lying, but they STILL have better binary drivers than ATI.</p><p>Here's a question for ATI fans: why can't I drive 2 separate monitors with different resolutions in whatever ATI's Xinerama equivalent is?  TwinView can handle that.  Last time I tried with the ubuntu binary driver install, it put pillarboxes on my 1920x1080 screen to match res with the 1600x1200 screen, and that's just wee-t0dd-ed.  The Windows 7 Catalyst driver seemed to have no problem with that configuration.</p><p>The open-source driver for ATI also seems to handle it OK, but I get ridiculous pointer tearing all the time.</p><p>(been NVIDIA for like 10 years until this last card, a Sapphire 4870 1GB..  Maybe when Hackintosh Snow Leopard comes out I can put it on my box..)</p></htmltext>
<tokenext>Not liking NVIDIA 's stupidity lately with the mobile chipsets and hemming/hawing/lying , but they STILL have better binary drivers than ATI .
Here 's a question for ATI fans : why ca n't I drive 2 separate monitors with different resolutions in whatever ATI 's Xinerama equivalent is ?
TwinView can handle that .
Last time I tried with the ubuntu binary driver install , it put pillarboxes on my 1920x1080 screen to match res with the 1600x1200 screen , and that 's just wee-t0dd-ed .
The Windows 7 Catalyst driver seemed to have no problem with that configuration .
The open-source driver for ATI also seems to handle it OK , but I get ridiculous pointer tearing all the time .
( been NVIDIA for like 10 years until this last card , a Sapphire 4870 1GB.. Maybe when Hackintosh Snow Leopard comes out I can put it on my box.. )</tokentext>
<sentencetext>Not liking NVIDIA's stupidity lately with the mobile chipsets and hemming/hawing/lying, but they STILL have better binary drivers than ATI.
Here's a question for ATI fans: why can't I drive 2 separate monitors with different resolutions in whatever ATI's Xinerama equivalent is?
TwinView can handle that.
Last time I tried with the ubuntu binary driver install, it put pillarboxes on my 1920x1080 screen to match res with the 1600x1200 screen, and that's just wee-t0dd-ed.
The Windows 7 Catalyst driver seemed to have no problem with that configuration.
The open-source driver for ATI also seems to handle it OK, but I get ridiculous pointer tearing all the time.
(been NVIDIA for like 10 years until this last card, a Sapphire 4870 1GB..  Maybe when Hackintosh Snow Leopard comes out I can put it on my box..)</sentencetext>
</comment>
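For the mixed-resolution question above: with the NVIDIA binary driver of that era this was an xorg.conf affair. A sketch under assumed names (the DFP-0/DFP-1 connector labels and identifiers are illustrative, not from the poster; nvidia-settings reports the real ones):

```
# Fragment of /etc/X11/xorg.conf for NVIDIA TwinView with two panels
# of different native resolutions side by side (illustrative values).
Section "Screen"
    Identifier "Screen0"
    Device     "Device0"
    Option     "TwinView"  "true"
    # Pin each panel to its native mode and offset the second one;
    # the combined 3520x1200 desktop avoids any pillarboxing.
    Option     "MetaModes" "DFP-0: 1920x1080 +0+0, DFP-1: 1600x1200 +1920+0"
EndSection
```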
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399647</id>
	<title>new nvidia chipset</title>
	<author>linu77</author>
	<datestamp>1245439080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The Nvidia chipsets are faster than AMD's. I have been using Nvidia chipsets since nForce 2 and I love them.
Right now I am trying to find a Gigabyte GA-M720-US3, but I can't find it in the USA. It has the new
nForce 720D chipset.</htmltext>
<tokenext>The Nvidia chipsets are faster than AMD 's .
I have been using Nvidia chipsets since nForce 2 and I love them .
Right now I am trying to find a Gigabyte GA-M720-US3 , but I ca n't find it in the USA .
It has the new nForce 720D chipset .</tokentext>
<sentencetext>The Nvidia chipsets are faster than AMD's.
I have been using Nvidia chipsets since nForce 2 and I love them.
Right now I am trying to find a Gigabyte GA-M720-US3, but I can't find it in the USA.
It has the new nForce 720D chipset.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399243</id>
	<title>Re:I don't know but...</title>
	<author>hairyfeet</author>
	<datestamp>1245432960000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>I think the answer is a lot simpler and more pedestrian. Nvidia seriously got the snot kicked out of them for the whole "bad solder" bit and the covering up they and the OEMs tried to do, and simply don't have the cash to keep up with both AMD/ATI and Intel.</p><p>Somebody higher up probably said "go with the most market share", but of course Intel not giving them a license to produce for the 1366 pretty much leaves Nvidia with no chipset market at all if they lose the suit against Intel. Meanwhile the AMD/ATI chips have become pretty decent and run nice and cool compared to what my friends said about the last couple of Nvidia chips, which according to them were space heaters.</p><p>Personally I think if Larrabee turns out to be good Nvidia is gonna be in serious trouble. I know a lot of folks right now that won't touch an Nvidia GPU because of the whole bad solder fiasco, and the new chipsets from ATI do all the tasks that most everyday folks use their PCs for. So unless they buy out Via and it ends up a three way, with Intel/Larrabee, AMD/ATI, and Nvidia/Via, I could foresee a future where Nvidia slowly gets squeezed out of the market. Which is probably why they are trying to push Ion and Tegra, to get some much needed traction in the mobile spaces.</p><p>

 But I personally don't think it is some big conspiracy, I just think Nvidia was hurt a lot worse by the bad solder fiasco than they are letting on and are having to prioritize their resources. But switching to Intel without a license for LGA1366 in place was a seriously dumb move, and I wouldn't be surprised if they end up out of the chipset market altogether.</p></htmltext>
<tokenext>I think the answer is a lot simpler and more pedestrian .
Nvidia seriously got the snot kicked out of them for the whole " bad solder " bit and the covering up they and the OEMs tried to do , and simply do n't have the cash to keep up with both AMD/ATI and Intel .
Somebody higher up probably said " go with the most market share " , but of course Intel not giving them a license to produce for the 1366 pretty much leaves Nvidia with no chipset market at all if they lose the suit against Intel .
Meanwhile the AMD/ATI chips have become pretty decent and run nice and cool compared to what my friends said about the last couple of Nvidia chips , which according to them were space heaters .
Personally I think if Larrabee turns out to be good Nvidia is gon na be in serious trouble .
I know a lot of folks right now that wo n't touch an Nvidia GPU because of the whole bad solder fiasco , and the new chipsets from ATI do all the tasks that most everyday folks use their PCs for .
So unless they buy out Via and it ends up a three way , with Intel/Larrabee , AMD/ATI , and Nvidia/Via , I could foresee a future where Nvidia slowly gets squeezed out of the market .
Which is probably why they are trying to push Ion and Tegra , to get some much needed traction in the mobile spaces .
But I personally do n't think it is some big conspiracy , I just think Nvidia was hurt a lot worse by the bad solder fiasco than they are letting on and are having to prioritize their resources .
But switching to Intel without a license for LGA1366 in place was a seriously dumb move , and I would n't be surprised if they end up out of the chipset market altogether .</tokentext>
<sentencetext>I think the answer is a lot simpler and more pedestrian.
Nvidia seriously got the snot kicked out of them for the whole "bad solder" bit and the covering up they and the OEMs tried to do, and simply don't have the cash to keep up with both AMD/ATI and Intel.
Somebody higher up probably said "go with the most market share", but of course Intel not giving them a license to produce for the 1366 pretty much leaves Nvidia with no chipset market at all if they lose the suit against Intel.
Meanwhile the AMD/ATI chips have become pretty decent and run nice and cool compared to what my friends said about the last couple of Nvidia chips, which according to them were space heaters.
Personally I think if Larrabee turns out to be good Nvidia is gonna be in serious trouble.
I know a lot of folks right now that won't touch an Nvidia GPU because of the whole bad solder fiasco, and the new chipsets from ATI do all the tasks that most everyday folks use their PCs for.
So unless they buy out Via and it ends up a three way, with Intel/Larrabee, AMD/ATI, and Nvidia/Via, I could foresee a future where Nvidia slowly gets squeezed out of the market.
Which is probably why they are trying to push Ion and Tegra, to get some much needed traction in the mobile spaces.
But I personally don't think it is some big conspiracy, I just think Nvidia was hurt a lot worse by the bad solder fiasco than they are letting on and are having to prioritize their resources.
But switching to Intel without a license for LGA1366 in place was a seriously dumb move, and I wouldn't be surprised if they end up out of the chipset market altogether.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398689</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398497</id>
	<title>Well...</title>
	<author>Anonymous</author>
	<datestamp>1245424260000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext>Looks like no more NVIDIA for me; time to research what ATI has available. I like my AMD chips.</htmltext>
<tokenext>Looks like no more NVIDIA for me , time to research what ATI has available .
I like my AMD chips .</tokentext>
<sentencetext>Looks like no more NVIDIA for me, time to research what ATI has available.
I like my AMD chips.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398797</id>
	<title>Ready...Aim...Fire</title>
	<author>Nom du Keyboard</author>
	<datestamp>1245427680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>1: Cock gun.<br>
2: Aim at foot.</htmltext>
<tokenext>1 : Cock gun .
2 : Aim at foot .</tokentext>
<sentencetext>1: Cock gun.
2: Aim at foot.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398977</id>
	<title>Re:amd</title>
	<author>mysidia</author>
	<datestamp>1245429600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>
Getting Dell servers powered by AMD Opterons, which seem to run just as well as the Intel equivalent at a fraction of the price.
</p><p>
Oh, and SLI support doesn't matter in the datacenter, and for 90% of computer users.
</p><p>
It's the 10% of computer users who are extreme gamers willing to buy $1000 video cards and $5000 computer setups that feel they <em>have</em> to have SLI or Crossfire.
</p><p>
No biggie, they can buy ATI cards and curse nVidia for forsaking them.
</p></htmltext>
<tokenext>Getting Dell servers powered by AMD Opterons , which seem to run just as well as the Intel equivalent at a fraction of the price .
Oh , and SLI support does n't matter in the datacenter , and for 90 % of computer users .
It 's the 10 % of computer users who are extreme gamers willing to buy $ 1000 video cards and $ 5000 computer setups that feel they have to have SLI or Crossfire .
No biggie , they can buy ATI cards and curse nVidia for forsaking them .</tokentext>
<sentencetext>
Getting Dell servers powered by AMD Opterons, which seem to run just as well as the Intel equivalent at a fraction of the price.
Oh, and SLI support doesn't matter in the datacenter, and for 90% of computer users.
It's the 10% of computer users who are extreme gamers willing to buy $1000 video cards and $5000 computer setups that feel they have to have SLI or Crossfire.
No biggie, they can buy ATI cards and curse nVidia for forsaking them.
</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398561</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28403035</id>
	<title>Re:Well...</title>
	<author>Anonymous</author>
	<datestamp>1245523680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Call me when you catch up to 1999.  I've had both ATI and Nvidia cards in my house at all times since the 90s.  ATI had some issues in the past, but they've been fine for years now.  For quite a while now, the only issue I have with them is the annoyance of having to reinstall so many things on the All-in-Wonder cards when you upgrade your drivers.  On a regular card, I haven't seen an issue for a long time.</p></htmltext>
<tokenext>Call me when you catch up to 1999 .
I 've had both ATI and Nvidia cards in my house at all times since the 90s .
ATI had some issues in the past , but they 've been fine for years now .
For quite a while now , the only issue I have with them is the annoyance of having to reinstall so many things on the all in wonder cards when you upgrade your drivers .
On a regular card , I have n't seen an issue for a long time .</tokentext>
<sentencetext>Call me when you catch up to 1999.
I've had both ATI and Nvidia cards in my house at all times since the 90s.
ATI had some issues in the past, but they've been fine for years now.
For quite a while now, the only issue I have with them is the annoyance of having to reinstall so many things on the all in wonder cards when you upgrade your drivers.
On a regular card, I haven't seen an issue for a long time.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398669</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28400685</id>
	<title>Abacadabra</title>
	<author>Anonymous</author>
	<datestamp>1245498780000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Neither the summary nor the article take the trouble to explain what SLI is.</p><p> <a href="http://en.wikipedia.org/wiki/Scalable_Link_Interface" title="wikipedia.org" rel="nofollow">Wikipedia:</a> [wikipedia.org] </p><blockquote><div><p>Scalable Link Interface (SLI) is a brand name for a multi-GPU solution developed by Nvidia for linking two or more video cards together to produce a single output.</p></div></blockquote><p>Just one extra sentence in the summary would have saved me and countless others the trouble of having to look it up to decide whether I'm interested or not.</p>
	</htmltext>
<tokenext>Neither the summary nor the article take the trouble to explain what SLI is .
Wikipedia : [ wikipedia.org ] Scalable Link Interface ( SLI ) is a brand name for a multi-GPU solution developed by Nvidia for linking two or more video cards together to produce a single output .
Just one extra sentence in the summary would have saved me and countless others the trouble of having to look it up to decide whether I 'm interested or not .</tokentext>
<sentencetext>Neither the summary nor the article take the trouble to explain what SLI is.
Wikipedia: [wikipedia.org] Scalable Link Interface (SLI) is a brand name for a multi-GPU solution developed by Nvidia for linking two or more video cards together to produce a single output.
Just one extra sentence in the summary would have saved me and countless others the trouble of having to look it up to decide whether I'm interested or not.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398707</id>
	<title>Nvidia</title>
	<author>Anonymous</author>
	<datestamp>1245426540000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>Asus has jumped into bed with Microsoft as of late.  With AMD's purchase of ATI and its promise of open source drivers, and Nvidia's failure to move forward in open source, Nvidia and Asus have seen the last dollar of mine.</p></htmltext>
<tokenext>Asus has jumped into bed with Microsoft as of late .
With AMD 's purchase of ATI and its promise of open source drivers , and Nvidia 's failure to move forward in open source , Nvidia and Asus have seen the last dollar of mine .</tokentext>
<sentencetext>Asus has jumped into bed with Microsoft as of late.
With AMD's purchase of ATI and its promise of open source drivers, and Nvidia's failure to move forward in open source, Nvidia and Asus have seen the last dollar of mine.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28405729</id>
	<title>Re:Who CARES about SLI?</title>
	<author>averner</author>
	<datestamp>1245502260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>And why should anyone really, when modern day consoles cost about the same amount as one would spend on a moderately high end processor + video card, why the hell would most people want to spend an extra 300 bucks or so to have an extra video card at only 25% or less extra benefit in framerate?</p></div><p>Because they're unreliable, tend to overheat, and have expensive games that run at low framerates.<br> <br>

If you build your own PC you can make it out of highly-rated (by other purchasers of course, don't trust "consumer reporting" agencies) parts and carefully cool it so it works nicely.  Building it from scratch this way will be more expensive than getting a console, but if you're just upgrading it like you are saying in your post, it won't be.</p>
	</htmltext>
<tokenext>And why should anyone really , when modern day consoles cost about the same amount as one would spend on a moderately high end processor + video card , why the hell would most people want to spend an extra 300 bucks or so to have an extra video card at only 25 % or less extra benefit in framerate ?
Because they 're unreliable , tend to overheat , and have expensive games that run at low framerates .
If you build your own PC you can make it out of highly-rated ( by other purchasers of course , do n't trust " consumer reporting " agencies ) parts and carefully cool it so it works nicely .
Building it from scratch this way will be more expensive than getting a console , but if you 're just upgrading it like you are saying in your post , it wo n't be .</tokentext>
<sentencetext>And why should anyone really, when modern day consoles cost about the same amount as one would spend on a moderately high end processor + video card, why the hell would most people want to spend an extra 300 bucks or so to have an extra video card at only 25% or less extra benefit in framerate?
Because they're unreliable, tend to overheat, and have expensive games that run at low framerates.
If you build your own PC you can make it out of highly-rated (by other purchasers of course, don't trust "consumer reporting" agencies) parts and carefully cool it so it works nicely.
Building it from scratch this way will be more expensive than getting a console, but if you're just upgrading it like you are saying in your post, it won't be.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398569</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28401101</id>
	<title>Re:amd</title>
	<author>macshit</author>
	<datestamp>1245506100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>I hope this comment is in jest pointing out that people have been claiming that AMD is in trouble for years</p></div><p>I never quite figured that out.  Sure AMD's had a rough patch in recent years, but Intel spent <em>years</em> churning out crap sub-standard processors.  Intel tried to fix their problems, and have come back with great products; there's no reason to think AMD can't do the same -- and indeed the Phenom II seems to be excellent (it doesn't completely crush Intel's offerings like AMD's products did a few years ago, but Intel's not turning out complete crap these days).
</p><p>What was particularly surprising to me, though, is how <em>quickly</em>, and with what vehemence, people started declaring "AMD is finished!1!" after the Core 2 proved a better CPU than what AMD was offering at the time.  I got the weird feeling that some of these people had been just <em>waiting</em>....</p>
	</htmltext>
<tokenext>I hope this comment is in jest pointing out that people have been claiming that AMD is in trouble for years
I never quite figured that out .
Sure AMD 's had a rough patch in recent years , but Intel spent years churning out crap sub-standard processors .
Intel tried to fix their problems , and have come back with great products ; there 's no reason to think AMD ca n't do the same -- and indeed the Phenom II seems to be excellent ( it does n't completely crush Intel 's offerings like AMD 's products did a few years ago , but Intel 's not turning out complete crap these days ) .
What was particularly surprising to me , though , is how quickly , and with what vehemence , people started declaring " AMD is finished ! 1 ! " after the Core 2 proved a better CPU than what AMD was offering at the time .
I got the weird feeling that some of these people had been just waiting ... .</tokentext>
<sentencetext>I hope this comment is in jest pointing out that people have been claiming that AMD is in trouble for years
I never quite figured that out.
Sure AMD's had a rough patch in recent years, but Intel spent years churning out crap sub-standard processors.
Intel tried to fix their problems, and have come back with great products; there's no reason to think AMD can't do the same -- and indeed the Phenom II seems to be excellent (it doesn't completely crush Intel's offerings like AMD's products did a few years ago, but Intel's not turning out complete crap these days).
What was particularly surprising to me, though, is how quickly, and with what vehemence, people started declaring "AMD is finished!1!" after the Core 2 proved a better CPU than what AMD was offering at the time.
I got the weird feeling that some of these people had been just waiting ....
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398757</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28411319</id>
	<title>sli is dead and so is nvidia</title>
	<author>Anonymous</author>
	<datestamp>1245604800000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext><p>As I've been reading about this on the internet, I also have to say Nvidia is DEAD, with Intel blocking Nvidia's nForce chipsets for the new processors and with nothing coming to AMD.</p><p>It really comes down to this...</p><p>1. SLI: dead<br>2. CUDA: not dead, but useless without SLI<br>3. PhysX: will die the day that Nvidia dies<br>4. Apple: in trouble if Nvidia dies.</p><p>As I see it, if Nvidia dies, PC gaming will be in a world of trouble.</p></htmltext>
<tokenext>As I 've been reading about this on the internet , I also have to say Nvidia is DEAD , with Intel blocking Nvidia 's nForce chipsets for the new processors and with nothing coming to AMD .
It really comes down to this ...
1 . SLI : dead
2 . CUDA : not dead , but useless without SLI
3 . PhysX : will die the day that Nvidia dies
4 . Apple : in trouble if Nvidia dies .
As I see it , if Nvidia dies , PC gaming will be in a world of trouble .</tokentext>
<sentencetext>As I've been reading about this on the internet, I also have to say Nvidia is DEAD, with Intel blocking Nvidia's nForce chipsets for the new processors and with nothing coming to AMD.
It really comes down to this...
1. SLI: dead
2. CUDA: not dead, but useless without SLI
3. PhysX: will die the day that Nvidia dies
4. Apple: in trouble if Nvidia dies.
As I see it, if Nvidia dies, PC gaming will be in a world of trouble.</sentencetext>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28403099
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398569
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28403035
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398497
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28400019
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398881
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398539
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28400639
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398569
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399011
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398569
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28407723
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398497
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28401853
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398881
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398539
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28408055
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398539
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28401367
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398569
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398829
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398539
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28400059
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399291
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28406917
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398489
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398977
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398561
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398491
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28402891
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398707
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398815
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398489
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28405849
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399173
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28401101
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398757
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398561
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398491
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399327
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398949
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398489
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28405511
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399173
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28400509
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399173
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28402547
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398569
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28419081
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399173
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28400067
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398539
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399243
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398689
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398489
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28401321
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399291
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28405729
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398569
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_19_2331209_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28400213
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398489
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_19_2331209.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398539
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28400067
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398829
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398881
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28400019
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28401853
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28408055
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_19_2331209.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399173
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28405849
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28405511
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28419081
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28400509
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_19_2331209.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398497
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398669
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28403035
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28407723
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_19_2331209.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398707
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28402891
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_19_2331209.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398797
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_19_2331209.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398489
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398815
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398689
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399243
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28406917
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28400213
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398949
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399327
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_19_2331209.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398491
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398561
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398977
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398757
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28401101
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_19_2331209.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399291
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28400059
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28401321
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_19_2331209.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398569
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28399011
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28405729
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28401367
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28400639
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28403099
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28402547
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_19_2331209.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28401035
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_19_2331209.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398739
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_19_2331209.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398495
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_19_2331209.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28401171
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_19_2331209.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_19_2331209.28398717
</commentlist>
</conversation>
