<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_12_05_005234</id>
	<title>Intel Kills Consumer Larrabee Plans</title>
	<author>Soulskill</author>
	<datestamp>1260018360000</datestamp>
	<htmltext>An anonymous reader tips news that Intel has <a href="http://www.semiaccurate.com/2009/12/04/intel-kills-consumer-larrabee-focuses-future-variants/">canceled plans for a consumer version</a> of their long-awaited and oft-delayed Larrabee chip, opting instead to use it as a development platform product. From VentureBeat:
<i>"'Larrabee silicon and software development are behind where we had hoped to be at this point in the project,' said Nick Knuppfler, a spokesman for Intel in Santa Clara, Calif. '<a href="http://venturebeat.com/2009/12/04/intel-cancels-larrabee-consumer-graphics-chip/">Larrabee will not be a consumer product</a>.' In other words, it&rsquo;s not entirely dead. It&rsquo;s mostly dead. Instead of launching the chip in the consumer market, it will make it available as a software development platform for both internal and external developers. Those developers can use it to develop software that can run in high-performance computers. But Knuppfler said that Intel will continue to work on stand-alone graphics chip designs. He said the company would have more to say about that in 2010."</i></htmltext>
<tokentext>An anonymous reader tips news that Intel has canceled plans for a consumer version of their long-awaited and oft-delayed Larrabee chip , opting instead to use it as a development platform product .
From VentureBeat : " 'Larrabee silicon and software development are behind where we had hoped to be at this point in the project, ' said Nick Knuppfler , a spokesman for Intel in Santa Clara , Calif. 'Larrabee will not be a consumer product .
' In other words , it 's not entirely dead .
It 's mostly dead .
Instead of launching the chip in the consumer market , it will make it available as a software development platform for both internal and external developers .
Those developers can use it to develop software that can run in high-performance computers .
But Knuppfler said that Intel will continue to work on stand-alone graphics chip designs .
He said the company would have more to say about that in 2010 .
"</tokentext>
<sentencetext>An anonymous reader tips news that Intel has canceled plans for a consumer version of their long-awaited and oft-delayed Larrabee chip, opting instead to use it as a development platform product.
From VentureBeat:
"'Larrabee silicon and software development are behind where we had hoped to be at this point in the project,' said Nick Knuppfler, a spokesman for Intel in Santa Clara, Calif. 'Larrabee will not be a consumer product.
' In other words, it’s not entirely dead.
It’s mostly dead.
Instead of launching the chip in the consumer market, it will make it available as a software development platform for both internal and external developers.
Those developers can use it to develop software that can run in high-performance computers.
But Knuppfler said that Intel will continue to work on stand-alone graphics chip designs.
He said the company would have more to say about that in 2010.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333212</id>
	<title>Likely the Intel PowerVR partnership</title>
	<author>Criton</author>
	<datestamp>1260043260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I suspect the Intel and PowerVR partnership may have something to do with no consumer Larrabee plans.
This partnership has already resulted in the 3100ce, and PowerVR has been working on some 1080p media accelerators.
Larrabee does use a lot of power for the level of performance it would offer as a 3D chipset; perhaps Intel and PowerVR have come up with something that does not use 160 watts.</htmltext>
<tokentext>I suspect the Intel and PowerVR partnership may have something to do with no consumer Larrabee plans .
This partnership has already resulted in the 3100ce and PowerVR has been working on some 1080p media accelerators .
Larrabee does use a lot of power for the level of performance it would offer as a 3D chipset ; perhaps Intel and PowerVR have come up with something that does not use 160 watts .</tokentext>
<sentencetext>I suspect the Intel and PowerVR partnership may have something to do with no consumer Larrabee plans.
This partnership has already resulted in the 3100ce and PowerVR has been working on some 1080p media accelerators.
Larrabee does use a lot of power for the level of performance it would offer as a 3D chipset; perhaps Intel and PowerVR have come up with something that does not use 160 watts.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332144</id>
	<title>" In other words, it's not entirely dead."</title>
	<author>Anonymous Poodle</author>
	<datestamp>1259941800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"I'm not dead yet!"</p></htmltext>
<tokentext>I 'm not dead yet !
"</tokentext>
<sentencetext>I'm not dead yet!
"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332180</id>
	<title>Re:Heterogeneous Processors Are Doomed</title>
	<author>Anonymous</author>
	<datestamp>1259942460000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Because a fringe pseudoscience blogger has completely pwned the thousands of engineers at Intel. Yep.</p><p>Oh, the post above *sounds* reasonable. But then you poke around and find things like this:</p><p>http://rebelscience.blogspot.com/2009/11/lattice-propulsion-one-more-clue.html</p><p>From that article:</p><div class="quote"><p>The electrostatic field between two charged parallel surfaces consists of opposite-facing seraphim being emitted by the plates. The seraphim reaching the plates interact with the plate particles.</p></div><p>Clearly somebody needs their meds adjusted...</p></htmltext>
<tokentext>Because a fringe pseudoscience blogger has completely pwned the thousands of engineers at Intel .
Yep . Oh , the post above * sounds * reasonable .
But then you poke around and find things like this : http://rebelscience.blogspot.com/2009/11/lattice-propulsion-one-more-clue.html From that article : The electrostatic field between two charged parallel surfaces consists of opposite-facing seraphim being emitted by the plates .
The seraphim reaching the plates interact with the plate particles . Clearly somebody needs their meds adjusted ...</tokentext>
<sentencetext>Because a fringe pseudoscience blogger has completely pwned the thousands of engineers at Intel.
Yep. Oh, the post above *sounds* reasonable.
But then you poke around and find things like this: http://rebelscience.blogspot.com/2009/11/lattice-propulsion-one-more-clue.html From that article: The electrostatic field between two charged parallel surfaces consists of opposite-facing seraphim being emitted by the plates.
The seraphim reaching the plates interact with the plate particles. Clearly somebody needs their meds adjusted...
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331868</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332194</id>
	<title>Parrot-like</title>
	<author>grasshoppa</author>
	<datestamp>1259942700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Maybe it's just resting?<br>Stunned?<br>Pining for the fjords?</p><p>I'll show myself out.</p></htmltext>
<tokentext>Maybe it 's just resting ? Stunned ? Pining for the fjords ? I 'll show myself out .</tokentext>
<sentencetext>Maybe it's just resting? Stunned? Pining for the fjords? I'll show myself out.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332474</id>
	<title>Re:Oh rats</title>
	<author>Lemming Mark</author>
	<datestamp>1259945940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Craptastic as the Intel cards may be, in overall performance terms, I could happily take any of the integrated parts by Intel that have decent Linux support on my next desktop, even if that meant a massive reduction in performance.  I have an Xbox 360 for playing games on and I would love for my desktop to Just Work as well as my Eee does with Linux.  That said, with ATI cards getting better and better support under Linux it is quite possible that they'll be the best option by the time I upgrade again...</p></htmltext>
<tokentext>Craptastic as the Intel cards may be , in overall performance terms , I could happily take any of the integrated parts by Intel that have decent Linux support on my next desktop , even if that meant a massive reduction in performance .
I have an Xbox 360 for playing games on and I would love for my desktop to Just Work as well as my Eee does with Linux .
That said , with ATI cards getting better and better support under Linux it is quite possible that they 'll be the best option by the time I upgrade again.. .</tokentext>
<sentencetext>Craptastic as the Intel cards may be, in overall performance terms, I could happily take any of the integrated parts by Intel that have decent Linux support on my next desktop, even if that meant a massive reduction in performance.
I have an Xbox 360 for playing games on and I would love for my desktop to Just Work as well as my Eee does with Linux.
That said, with ATI cards getting better and better support under Linux it is quite possible that they'll be the best option by the time I upgrade again...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331672</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30334862</id>
	<title>Re:So the next mini, low end imac and 13" macbook'</title>
	<author>agnosticnixie</author>
	<datestamp>1260028260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"The new macbook pro, now with AMD... and only 3 hours of battery"</p><p>Somehow I think AMD still has a few things to learn about mobiles, and that's the mac's main market.</p></htmltext>
<tokentext>" The new macbook pro , now with AMD... and only 3 hours of battery " Somehow I think AMD still has a few things to learn about mobiles , and that 's the mac 's main market .</tokentext>
<sentencetext>"The new macbook pro, now with AMD... and only 3 hours of battery" Somehow I think AMD still has a few things to learn about mobiles, and that's the mac's main market.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331784</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333682</id>
	<title>Re:Wow... shock horror</title>
	<author>robbiedo</author>
	<datestamp>1260009180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If I worked at Intel in the group developing a product, I would keep my mouth shut, even if I was an intern. There are possibly a large group of smart dedicated people trying to make this happen.</htmltext>
<tokentext>If I worked at Intel in the group developing a product , I would keep my mouth shut , even if I was an intern .
There are possibly a large group of smart dedicated people trying to make this happen .</tokentext>
<sentencetext>If I worked at Intel in the group developing a product, I would keep my mouth shut, even if I was an intern.
There are possibly a large group of smart dedicated people trying to make this happen.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331890</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333560</id>
	<title>Re:Larrabee = Graphics Chip competing w nVidia</title>
	<author>Anonymous</author>
	<datestamp>1260007020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Why doesn't Intel just stop trying to create GPUs? I've got a new laptop with one of their graphics chipsets, and it absolutely sucks. Seems like Intel should stick to 'normal' processors.</htmltext>
<tokentext>Why does n't Intel just stop trying to create GPUs ?
I 've got a new laptop with one of their graphics chipsets , and it absolutely sucks .
Seems like Intel should stick to 'normal ' processors .</tokentext>
<sentencetext>Why doesn't Intel just stop trying to create GPUs?
I've got a new laptop with one of their graphics chipsets, and it absolutely sucks.
Seems like Intel should stick to 'normal' processors.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331648</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332442</id>
	<title>Re:Oh rats</title>
	<author>alvinrod</author>
	<datestamp>1259945700000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>I don't know about that. Intel's offerings that are slated to come out 1Q - 1H of 2010 could give AMD some problems. Right now AMD has the performance advantage in the server space, but <a href="http://en.wikipedia.org/wiki/Gulftown_(microprocessor)" title="wikipedia.org">Gulftown</a> [wikipedia.org] will likely trump their offerings. <a href="http://en.wikipedia.org/wiki/Arrandale_(microprocessor)" title="wikipedia.org">Arrandale</a> [wikipedia.org] also looks quite impressive, especially the quad core i7 with an 18 watt TDP. The cores only run at 1.2 GHz, but with their Turbo boost the chip can clock up to 2.2 GHz. That will offer some amazing battery life for laptops and still provide good performance. I do believe some of the Arrandale processors will have a GPU on die as well. Granted it's an Intel GPU, but it offers some great power and cost savings over having to include a discrete card.
<br> <br>
AMD doesn't look to have anything great coming out until late 2010 or early 2011 based on their <a href="http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3673" title="anandtech.com">roadmap</a> [anandtech.com]. It helps that ATI is kicking ass in the graphics space. Right now they're winning on price and power. If they can get more of their 5800 series out in the market and release the mobile versions of those cards sooner rather than later, they'll be able to push a lot of hardware that way. However, they're not a real threat to Intel until they can get their SOC products out the door and offer a really compelling reason to go with their products.
<br> <br>
Settling their legal issues with Intel will also help them a lot in the long run, but they're not out of the woods yet. They're still having financial problems, but if they can get through the next 18 months they'll be in great shape. The fact that they've been ahead of schedule on a lot of their new chips in the last year has probably helped substantially as well. AMD is in a good position for the long term, but they need decent sales in the coming quarters, which may be difficult to do with Intel releasing a lot of great new chips, especially in the mobile market where AMD hasn't been particularly strong recently.</htmltext>
<tokentext>I do n't know about that .
Intel 's offerings that are slated to come out 1Q - 1H of 2010 could give AMD some problems .
Right now AMD has the performance advantage in the server space , but Gulftown [ wikipedia.org ] will likely trump their offerings .
Arrandale [ wikipedia.org ] also looks quite impressive , especially the quad core i7 with an 18 watt TDP .
The cores only run at 1.2 GHz , but with their Turbo boost the chip can clock up to 2.2 GHz .
That will offer some amazing battery life for laptops and still provide good performance .
I do believe some of the Arrandale processors will have a GPU on die as well .
Granted it 's an Intel GPU , but it offers some great power and cost savings over having to include a discrete card .
AMD does n't look to have anything great coming out until late 2010 or early 2011 based on their roadmap [ anandtech.com ] .
It helps that ATI is kicking ass in the graphics space .
Right now they 're winning on price and power .
If they can get more of their 5800 series out in the market and release the mobile versions of those cards sooner rather than later , they 'll be able to push a lot of hardware that way .
However , they 're not a real threat to Intel until they can get their SOC products out the door and offer a really compelling reason to go with their products .
Settling their legal issues with Intel will also help them a lot in the long run , but they 're not out of the woods yet .
They 're still having financial problems , but if they can get through the next 18 months they 'll be in great shape .
The fact that they 've been ahead of schedule on a lot of their new chips in the last year has probably helped substantially as well .
AMD is in a good position for the long term , but they need decent sales in the coming quarters , which may be difficult to do with Intel releasing a lot of great new chips , especially in the mobile market where AMD has n't been particularly strong recently .</tokentext>
<sentencetext>I don't know about that.
Intel's offerings that are slated to come out 1Q - 1H of 2010 could give AMD some problems.
Right now AMD has the performance advantage in the server space, but Gulftown [wikipedia.org] will likely trump their offerings.
Arrandale [wikipedia.org] also looks quite impressive, especially the quad core i7 with an 18 watt TDP.
The cores only run at 1.2 GHz, but with their Turbo boost the chip can clock up to 2.2 GHz.
That will offer some amazing battery life for laptops and still provide good performance.
I do believe some of the Arrandale processors will have a GPU on die as well.
Granted it's an Intel GPU, but it offers some great power and cost savings over having to include a discrete card.
AMD doesn't look to have anything great coming out until late 2010 or early 2011 based on their roadmap [anandtech.com].
It helps that ATI is kicking ass in the graphics space.
Right now they're winning on price and power.
If they can get more of their 5800 series out in the market and release the mobile versions of those cards sooner rather than later, they'll be able to push a lot of hardware that way.
However, they're not a real threat to Intel until they can get their SOC products out the door and offer a really compelling reason to go with their products.
Settling their legal issues with Intel will also help them a lot in the long run, but they're not out of the woods yet.
They're still having financial problems, but if they can get through the next 18 months they'll be in great shape.
The fact that they've been ahead of schedule on a lot of their new chips in the last year has probably helped substantially as well.
AMD is in a good position for the long term, but they need decent sales in the coming quarters, which may be difficult to do with Intel releasing a lot of great new chips, especially in the mobile market where AMD hasn't been particularly strong recently.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331672</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333696</id>
	<title>Re:Oh rats</title>
	<author>this great guy</author>
	<datestamp>1260009300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>
ATI are <i>about</i> to become the leader? They <b>are</b> already the leader in <b>all</b> categories: perf/$, perf/W, absolute perf, and at all price points. See list below. For gaming performance, the GFLOPS ratings are a roughly (+/- 30%) good-enough approximation to compare ATI vs. Nvidia. For GPGPU performance, the GFLOPS rating is actually unfair to ATI because Nvidia's GT200 microarchitecture causes it to be artificially inflated (they assume a MUL+MAD pair executing 3 floating-point ops per cycle, whereas ATI assumes a regular fused MAD executing 2 floating-point ops per cycle). Meaning that an ATI GPU rated 200 GFLOPS actually executes ALU-bound workloads as fast as an Nvidia GPU rated 300 GFLOPS. ATI's lead is such that it's not even funny anymore. There are rumors of Nvidia killing the high-end (GTX 285, 295) to focus only on the extreme entry-level segment (sub-$100). And GT300 (Fermi) will not enter mass production before the end of Q1 2010. I am concerned by the lack of competition... ATI is free to impose whatever price structure they want.
</p><ul> <li>
If you have $500+ to spend: ATI HD 5970 (4640 GFLOPS, 294 Watt, ~$600) vs. Nvidia GTX 295 (1843 GFLOPS, 289 Watt, ~$500).
</li><li>
If you have ~$400 to spend: ATI HD 5870 (2720 GFLOPS, 188 Watt, ~$410) vs. Nvidia GTX 285 (1063 GFLOPS, 204 Watt, ~$400).
</li><li>
If you have ~$300 to spend: ATI HD 5850 (2088 GFLOPS, 151 Watt, ~$310) vs. Nvidia GTX 275 (1011 GFLOPS, 219 Watt, ~$300).
</li><li>
If you have ~$200 to spend: ATI HD 5770 (1360 GFLOPS, 108 Watt, ~$170) vs. Nvidia GTX 260 Core 216 (805 GFLOPS, 182 Watt, ~$200).
</li><li>
If you have ~$150 to spend: ATI HD 5750 (1088 GFLOPS, 86 Watt, ~$155) vs. Nvidia GTX 260 (715 GFLOPS, 182 Watt, ~$170).
</li><li>
If you have ~$100 to spend: ATI HD 4770 (960 GFLOPS, 80 Watt, ~$110) vs. Nvidia GTS 250 (470 GFLOPS, 145 Watt, ~$110).
</li></ul></htmltext>
<tokentext>ATI are about to become the leader ?
They are already the leader in all categories : perf/ $ , perf/W , absolute perf , and at all price points .
See list below .
For gaming performance , the GFLOPS ratings are a roughly ( +/- 30 % ) good-enough approximation to compare ATI vs. Nvidia . For GPGPU performance , the GFLOPS rating is actually unfair to ATI because Nvidia 's GT200 microarchitecture causes it to be artificially inflated ( they assume a MUL + MAD pair executing 3 floating-point ops per cycle , whereas ATI assumes a regular fused MAD executing 2 floating-point ops per cycle ) .
Meaning that an ATI GPU rated 200 GFLOPS actually executes ALU-bound workloads as fast as an Nvidia GPU rated 300 GFLOPS .
ATI 's lead is such that it 's not even funny anymore .
There are rumors of Nvidia killing the high-end ( GTX 285 , 295 ) to focus only on the extreme entry-level segment ( sub- $ 100 ) .
And GT300 ( Fermi ) will not enter mass production before the end of Q1 2010 .
I am concerned by the lack of competition... ATI is free to impose whatever price structure they want .
If you have $ 500 + to spend : ATI HD 5970 ( 4640 GFLOPS , 294 Watt , ~ $ 600 ) vs. Nvidia GTX 295 ( 1843 GFLOPS , 289 Watt , ~ $ 500 ) .
If you have ~ $ 400 to spend : ATI HD 5870 ( 2720 GFLOPS , 188 Watt , ~ $ 410 ) vs. Nvidia GTX 285 ( 1063 GFLOPS , 204 Watt , ~ $ 400 ) .
If you have ~ $ 300 to spend : ATI HD 5850 ( 2088 GFLOPS , 151 Watt , ~ $ 310 ) vs. Nvidia GTX 275 ( 1011 GFLOPS , 219 Watt , ~ $ 300 ) .
If you have ~ $ 200 to spend : ATI HD 5770 ( 1360 GFLOPS , 108 Watt , ~ $ 170 ) vs. Nvidia GTX 260 Core 216 ( 805 GFLOPS , 182 Watt , ~ $ 200 ) .
If you have ~ $ 150 to spend : ATI HD 5750 ( 1088 GFLOPS , 86 Watt , ~ $ 155 ) vs. Nvidia GTX 260 ( 715 GFLOPS , 182 Watt , ~ $ 170 ) .
If you have ~ $ 100 to spend : ATI HD 4770 ( 960 GFLOPS , 80 Watt , ~ $ 110 ) vs. Nvidia GTS 250 ( 470 GFLOPS , 145 Watt , ~ $ 110 ) .</tokentext>
<sentencetext>
ATI are about to become the leader?
They are already the leader in all categories: perf/$, perf/W, absolute perf, and at all price points.
See list below.
For gaming performance, the GFLOPS ratings are a roughly (+/- 30%) good-enough approximation to compare ATI vs. Nvidia. For GPGPU performance, the GFLOPS rating is actually unfair to ATI because Nvidia's GT200 microarchitecture causes it to be artificially inflated (they assume a MUL+MAD pair executing 3 floating-point ops per cycle, whereas ATI assumes a regular fused MAD executing 2 floating-point ops per cycle).
Meaning that an ATI GPU rated 200 GFLOPS actually executes ALU-bound workloads as fast as an Nvidia GPU rated 300 GFLOPS.
ATI's lead is such that it's not even funny anymore.
There are rumors of Nvidia killing the high-end (GTX 285, 295) to focus only on the extreme entry-level segment (sub-$100).
And GT300 (Fermi) will not enter mass production before the end of Q1 2010.
I am concerned by the lack of competition... ATI is free to impose whatever price structure they want.
If you have $500+ to spend: ATI HD 5970 (4640 GFLOPS, 294 Watt, ~$600) vs. Nvidia GTX 295 (1843 GFLOPS, 289 Watt, ~$500).
If you have ~$400 to spend: ATI HD 5870 (2720 GFLOPS, 188 Watt, ~$410) vs. Nvidia GTX 285 (1063 GFLOPS, 204 Watt, ~$400).
If you have ~$300 to spend: ATI HD 5850 (2088 GFLOPS, 151 Watt, ~$310) vs. Nvidia GTX 275 (1011 GFLOPS, 219 Watt, ~$300).
If you have ~$200 to spend: ATI HD 5770 (1360 GFLOPS, 108 Watt, ~$170) vs. Nvidia GTX 260 Core 216 (805 GFLOPS, 182 Watt, ~$200).
If you have ~$150 to spend: ATI HD 5750 (1088 GFLOPS, 86 Watt, ~$155) vs. Nvidia GTX 260 (715 GFLOPS, 182 Watt, ~$170).
If you have ~$100 to spend: ATI HD 4770 (960 GFLOPS, 80 Watt, ~$110) vs. Nvidia GTS 250 (470 GFLOPS, 145 Watt, ~$110).
</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331672</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332072</id>
	<title>Mis-reported, I think.</title>
	<author>Anonymous</author>
	<datestamp>1259940420000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>This is being mis-reported or mis-communicated by Intel, I believe.<br>The first version of Larrabee silicon isn't going to consumers, that's all.<br>From the consumer's perspective, it's a delay.  Yet to be seen if it's fatal.<br>Otherwise, who'd want to use it to develop software?</p></htmltext>
<tokentext>This is being mis-reported or mis-communicated by Intel , I believe . The first version of Larrabee silicon is n't going to consumers , that 's all . From the consumer 's perspective , it 's a delay .
Yet to be seen if it 's fatal . Otherwise , who 'd want to use it to develop software ?</tokentext>
<sentencetext>This is being mis-reported or mis-communicated by Intel, I believe. The first version of Larrabee silicon isn't going to consumers, that's all. From the consumer's perspective, it's a delay.
Yet to be seen if it's fatal. Otherwise, who'd want to use it to develop software?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331868</id>
	<title>Heterogeneous Processors Are Doomed</title>
	<author>Anonymous</author>
	<datestamp>1259938200000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>0</modscore>
	<htmltext><p>The idea that the future of parallel processing somehow rests on the use of a bunch of hybrid cores built on the same die was wrong right out of the gate. If parallel CPU cores are a pain in the ass to program, what makes them think that it will be easier by combining them with a non-compatible type of parallel hardware? The CPU/GPU marriage is a match made in hell and, deep down, Intel knows it. Larrabee was just so much puffery and chest beating, king of the jungle and all that jazz.</p><p>The way to solve the parallel programming crisis is by first acknowledging that last century's computing paradigms are completely inadequate in the age of massive parallelism. It is time to change to the true computing religion and abandon the outmoded worship of the hopelessly flawed Turing Machine.</p><p>Next in line for destruction: AMD's Fusion. You read it here first.</p><p><a href="http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html" title="blogspot.com" rel="nofollow">How to Solve the Parallel Programming Crisis</a> [blogspot.com]</p></htmltext>
<tokentext>The idea that the future of parallel processing somehow rests on the use of a bunch of hybrid cores built on the same die was wrong right out of the gate .
If parallel CPU cores are a pain in the ass to program , what makes them think that it will be easier by combining them with a non-compatible type of parallel hardware ?
The CPU/GPU marriage is a match made in hell and , deep down , Intel knows it .
Larrabee was just so much puffery and chest beating , king of the jungle and all that jazz . The way to solve the parallel programming crisis is by first acknowledging that last century 's computing paradigms are completely inadequate in the age of massive parallelism .
It is time to change to the true computing religion and abandon the outmoded worship of the hopelessly flawed Turing Machine . Next in line for destruction : AMD 's Fusion .
You read it here first . How to Solve the Parallel Programming Crisis [ blogspot.com ]</tokentext>
<sentencetext>The idea that the future of parallel processing somehow rests on the use of a bunch of hybrid cores built on the same die was wrong right out of the gate.
If parallel CPU cores are a pain in the ass to program, what makes them think that it will be easier by combining them with a non-compatible type of parallel hardware?
The CPU/GPU marriage is a match made in hell and, deep down, Intel knows it.
Larrabee was just so much puffery and chest beating, king of the jungle and all that jazz. The way to solve the parallel programming crisis is by first acknowledging that last century's computing paradigms are completely inadequate in the age of massive parallelism.
It is time to change to the true computing religion and abandon the outmoded worship of the hopelessly flawed Turing Machine. Next in line for destruction: AMD's Fusion.
You read it here first. How to Solve the Parallel Programming Crisis [blogspot.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30335588</id>
	<title>Re:So the next mini, low end imac and 13" macbook'</title>
	<author>hattig</author>
	<datestamp>1260034020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It's likely that Apple will have to use discrete graphics on all but the lowest-of-the-low (a theoretical $799 MacBook) in order to not regress graphically. NVIDIA GT240 could be an option as a discrete replacement for the integrated 9400M.</p><p>It will require motherboard redesigns, but the CPU will force that anyway. The Intel I/O hub for the new systems is quite small, so there should be room.</p><p>However Apple have regressed graphically in the past (Radeon 9550M -&gt; Intel 2006 rubbish integrated graphics). It wouldn't fit in well with OpenCL and all that stuff that Apple harp on about though.</p></htmltext>
<tokentext>It 's likely that Apple will have to use discrete graphics on all but the lowest-of-the-low ( a theoretical $ 799 MacBook ) in order to not regress graphically .
NVIDIA GT240 could be an option as a discrete replacement for the integrated 9400M . It will require motherboard redesigns , but the CPU will force that anyway .
The Intel I/O hub for the new systems is quite small , so there should be room . However Apple have regressed graphically in the past ( Radeon 9550M - &gt; Intel 2006 rubbish integrated graphics ) .
It would n't fit in well with OpenCL and all that stuff that Apple harp on about though .</tokentext>
<sentencetext>It's likely that Apple will have to use discrete graphics on all but the lowest-of-the-low (a theoretical $799 MacBook) in order to not regress graphically.
NVIDIA GT240 could be an option as a discrete replacement for the integrated 9400M.
It will require motherboard redesigns, but the CPU will force that anyway.
The Intel I/O hub for the new systems is quite small, so there should be room.
However Apple have regressed graphically in the past (Radeon 9550M -&gt; Intel 2006 rubbish integrated graphics).
It wouldn't fit in well with OpenCL and all that stuff that Apple harp on about though.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331784</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30334160</id>
	<title>Re:Oh rats</title>
	<author>pjbass</author>
	<datestamp>1260017400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I don't play games on my laptop, but I do run compiz-fusion with many of the features enabled.  It's very eye-candy-heavy, and my integrated Intel graphics chip keeps up just fine.  My CPUs don't bear much load at all.  I don't think things are as grossly out of proportion as you make them out to be.  5 years ago, yes.  Today, not so much.</p></htmltext>
<tokenext>I do n't play games on my laptop , but I do run compiz-fusion with many of the features enabled .
It 's very eye-candy-heavy , and my integrated Intel graphics chip keeps up just fine .
My CPUs do n't bear much load at all .
I do n't think things are as grossly out of proportion as you make them out to be .
5 years ago , yes .
Today , not so much .</tokentext>
<sentencetext>I don't play games on my laptop, but I do run compiz-fusion with many of the features enabled.
It's very eye-candy-heavy, and my integrated Intel graphics chip keeps up just fine.
My CPUs don't bear much load at all.
I don't think things are as grossly out of proportion as you make them out to be.
5 years ago, yes.
Today, not so much.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331672</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331580</id>
	<title>Lol at the idiots</title>
	<author>Anonymous</author>
	<datestamp>1259935980000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>So they intend to take a product, whose chief advantage was that it could run old x86 code, and only sell it to people who are designing new software? Am I the only one that sees a problem with this?</p></htmltext>
<tokenext>So they intend to take a product , whose chief advantage was that it could run old x86 code , and only sell it to people who are designing new software ?
Am I the only one that sees a problem with this ?</tokentext>
<sentencetext>So they intend to take a product, whose chief advantage was that it could run old x86 code, and only sell it to people who are designing new software?
Am I the only one that sees a problem with this?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331648</id>
	<title>Larrabee = Graphics Chip competing w nVidia</title>
	<author>Anonymous</author>
	<datestamp>1259936520000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext><p>In case you've forgotten what a Larrabee was (like I had), it was Intel's planned graphics / vector processing chip, competing with nVidia and AMD / ATI graphics systems.  Here's the <a href="http://en.wikipedia.org/wiki/Larrabee_(GPU)" title="wikipedia.org" rel="nofollow">Wikipedia article</a> [wikipedia.org].</p></htmltext>
<tokenext>In case you 've forgotten what a Larrabee was ( like I had ) , it was Intel 's planned graphics / vector processing chip , competing with nVidia and AMD / ATI graphics systems .
Here 's the Wikipedia article [ wikipedia.org ] .</tokentext>
<sentencetext>In case you've forgotten what a Larrabee was (like I had), it was Intel's planned graphics / vector processing chip, competing with nVidia and AMD / ATI graphics systems.
Here's the Wikipedia article [wikipedia.org].</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333118</id>
	<title>Re:I disagree</title>
	<author>Anonymous</author>
	<datestamp>1259955060000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>It accelerates the shiny interface in Windows 7 and everything is nice and responsive. For business uses, this is plenty.</p></div><p>Photoshop and other transcoding applications already support GPU acceleration.  Can other apps be far behind?  Where will Intel be with such a backwards product line?  This type of behavior allowed AMD to define the 64 bit instruction set and forced Intel to lag behind.</p><p><div class="quote"><p>Ok well if you do care about games, then you want a discrete graphics solution. Integrated solutions will just never do well. Big reason is memory. You can make your card as fast as you like, if it shares system memory it is severely bottlenecked.</p></div><p>That problem has already been solved.  Also, even motherboards with IGPs now have dedicated frame buffer memory called sideport memory which allows for (some) gaming.  Since this is the first generation of chipsets with dedicated memory, I'm sure this technology will only get better and allow for an improved gaming experience.  Finally, many laptops already have dedicated video memory.</p><p><div class="quote"><p>Also they need to do better with chipsets and motherboards. A big advantage Intel has with regards to the reseller market is that they do their own solutions. Intel will sell you a CPU, chipset and motherboard and they all work together well.</p></div><p>How is this different than AMD and ATI?  You can get an integrated solution from AMD as well that includes CPU, chipset, and an IGP.  There are other reasons why AMD is not competitive in a lot of markets, but having a working, integrated solution isn't one of them.</p></div>
	</htmltext>
<tokenext>It accelerates the shiny interface in Windows 7 and everything is nice and responsive .
For business uses , this is plenty .
Photoshop and other transcoding applications already support GPU acceleration .
Can other apps be far behind ?
Where will Intel be with such a backwards product line ?
This type of behavior allowed AMD to define the 64 bit instruction set and forced Intel to lag behind .
Ok well if you do care about games , then you want a discrete graphics solution .
Integrated solutions will just never do well .
Big reason is memory .
You can make your card as fast as you like , if it shares system memory it is severely bottlenecked .
That problem has already been solved .
Also , even motherboards with IGPs now have dedicated frame buffer memory called sideport memory which allows for ( some ) gaming .
Since this is the first generation of chipsets with dedicated memory , I 'm sure this technology will only get better and allow for an improved gaming experience .
Finally , many laptops already have dedicated video memory .
Also they need to do better with chipsets and motherboards .
A big advantage Intel has with regards to the reseller market is that they do their own solutions .
Intel will sell you a CPU , chipset and motherboard and they all work together well .
How is this different than AMD and ATI ?
You can get an integrated solution from AMD as well that includes CPU , chipset , and an IGP .
There are other reasons why AMD is not competitive in a lot of markets , but having a working , integrated solution is n't one of them .</tokentext>
<sentencetext>It accelerates the shiny interface in Windows 7 and everything is nice and responsive.
For business uses, this is plenty.
Photoshop and other transcoding applications already support GPU acceleration.
Can other apps be far behind?
Where will Intel be with such a backwards product line?
This type of behavior allowed AMD to define the 64 bit instruction set and forced Intel to lag behind.
Ok well if you do care about games, then you want a discrete graphics solution.
Integrated solutions will just never do well.
Big reason is memory.
You can make your card as fast as you like, if it shares system memory it is severely bottlenecked.
That problem has already been solved.
Also, even motherboards with IGPs now have dedicated frame buffer memory called sideport memory which allows for (some) gaming.
Since this is the first generation of chipsets with dedicated memory, I'm sure this technology will only get better and allow for an improved gaming experience.
Finally, many laptops already have dedicated video memory.
Also they need to do better with chipsets and motherboards.
A big advantage Intel has with regards to the reseller market is that they do their own solutions.
Intel will sell you a CPU, chipset and motherboard and they all work together well.
How is this different than AMD and ATI?
You can get an integrated solution from AMD as well that includes CPU, chipset, and an IGP.
There are other reasons why AMD is not competitive in a lot of markets, but having a working, integrated solution isn't one of them.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332734</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333532</id>
	<title>Re:I wonder if Bangalore has anything to do with i</title>
	<author>Anonymous</author>
	<datestamp>1260006480000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Intel realized a big problem in computing that they cannot solve easily. Cache coherency is a real problem that cannot be solved simply. But their platform is good for evolutionary computing, and I see hardly any hardware solution other than Larrabee. The only solution is that each core goes with its own memory and you have a distributed approach like grid computing. Many computational optimization problems can be solved with evolutionary computing and not with a central memory. Data is processed on the agent. Moving small data back and forth from main memory is overkill. You need PCIe clusters of quad-core CPUs (mainly like Atom) with their own memory to keep clock speeds low. For example, MPEG-4 decoding can be done this way: you send chunks of continuous frames to clusters for decoding and collect back the results. You need a high-speed interconnect, but I think PCIe is enough.</p></htmltext>
<tokenext>Intel realized a big problem in computing that they can not solve easily .
The cache coherency is a real problem that can not be solved simply .
But their platform is good for evolutionary computing and I barely see a hardware solution other than Larrabee .
The only solution is that each core goes with its memory and you have a distributed approach like grid-computing .
Many computational optimization problems can be solved with evolutionary computing and not with a central memory .
Data is processed on the agent .
Moving small data back and forth from main memory is an overkill .
You need PCIe clusters of quad core CPUs ( mainly like atom ) with their own memory to maintain clock speeds low .
For example mpeg4 decoding can be done this way .
You send chunks of continuous frames for decoding to clusters and collect back the results .
But you need a high speed interconnect , but I think PCIe is enough .</tokentext>
<sentencetext>Intel realized a big problem in computing that they cannot solve easily.
The cache coherency is a real problem that cannot be solved simply.
But their platform is good for evolutionary computing and I barely see a hardware solution other than Larrabee.
The only solution is that each core goes with its memory and you have a distributed approach like grid-computing.
Many computational optimization problems can be solved with evolutionary computing and not with a central memory.
Data is processed on the agent.
Moving small data back and forth from main memory is an overkill.
You need PCIe clusters of quad core CPUs (mainly like atom) with their own memory to maintain clock speeds low.
For example mpeg4 decoding can be done this way.
You send chunks of continuous frames for decoding to clusters and collect back the results.
But you need a high speed interconnect, but I think PCIe  is enough.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332358</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331624</id>
	<title>Re:Oh rats</title>
	<author>Anonymous</author>
	<datestamp>1259936280000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext><p>One year ago I said the opposite, but right now it looks like ATI is the only high-end consumer graphics card supplier. I wonder if nvidia is about to throw in the towel in the high-end consumer segment and concentrate on the low end + HPC instead.</p></htmltext>
<tokenext>One year ago I said the opposite , but right now it looks like ATI is the only high-end consumer graphics card supplier .
I wonder if nvidia is about to throw the towel in high end consumer segment and concentrate in low end + HPC instead .</tokentext>
<sentencetext>One year ago I said the opposite, but right now it looks like ATI is the only high-end consumer graphics card supplier.
I wonder if nvidia is about to throw the towel in high end consumer segment and concentrate in low end + HPC instead.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331570</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333500</id>
	<title>Re:I wonder if Bangalore has anything to do with i</title>
	<author>RzUpAnmsCwrds</author>
	<datestamp>1260005820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The problem is, a many-core non cache-coherent x86-like system isn't particularly interesting. The big advantage of Larrabee was that you could treat it like a normal SMP system, including (presumably) running standard multithreaded C code on it. Once you have to deal with memory synchronization explicitly, Larrabee starts to look a lot more (from a programming standpoint) like Fermi, Cypress or whatever other Nvidia/ATI GPUs are out at the time.</p><p>There's nothing magic about x86/AMD64 in the HPC world. It's attractive because it is cheap and has good performance. Clusters can, have been, and still are built using POWER and other architectures.</p></htmltext>
<tokenext>The problem is , a many-core non cache-coherent x86-like system is n't particularly interesting .
The big advantage of Larrabee was that you could treat it like a normal SMP system , including ( presumably ) running standard multithreaded C code on it .
Once you have to deal with memory synchronization explicitly , Larrabee starts to look a lot more ( from a programming standpoint ) like Fermi , Cypress or whatever other Nvidia/ATI GPUs are out at the time .
There 's nothing magic about x86/AMD64 in the HPC world .
It 's attractive because it is cheap and has good performance .
Clusters can , have been , and still are built using POWER and other architectures .</tokentext>
<sentencetext>The problem is, a many-core non cache-coherent x86-like system isn't particularly interesting.
The big advantage of Larrabee was that you could treat it like a normal SMP system, including (presumably) running standard multithreaded C code on it.
Once you have to deal with memory synchronization explicitly, Larrabee starts to look a lot more (from a programming standpoint) like Fermi, Cypress or whatever other Nvidia/ATI GPUs are out at the time.
There's nothing magic about x86/AMD64 in the HPC world.
It's attractive because it is cheap and has good performance.
Clusters can, have been, and still are built using POWER and other architectures.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332358</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331672</id>
	<title>Re:Oh rats</title>
	<author>QuantumRiff</author>
	<datestamp>1259936640000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>I would say ATI/AMD is about to become the leader.  Intel is making it more difficult to ship mobile systems without the craptastic Intel graphics cards.  Larrabee was supposed to be a decent performance GPU that would almost be like a co-processor.</p><p>AMD has slightly slower CPUs, but their integrated graphics blow the snot out of the Intel ones, and are getting even better.  What good is a super fast CPU, if you can't play any games, or even do basic stuff without using the power-hungry CPU?</p></htmltext>
<tokenext>I would say ATI AMD are about to become the leader .
Intel is making it more difficult to ship mobile systems without the craptastic intel graphics cards .
Larrabee was supposed to be a decent performance GPU , that would almost be like a co-processor .
AMD has slightly slower CPU 's , but their integrated graphics blow the snot out of the Intel ones , and are getting even better .
What good is a super fast CPU , if you ca n't play any games , or even do basic stuff without using the power hungry CPU ?</tokentext>
<sentencetext>I would say ATI AMD are about to become the leader.
Intel is making it more difficult to ship mobile systems without the craptastic intel graphics cards.
Larrabee was supposed to be a decent performance GPU, that would almost be like a co-processor.
AMD has slightly slower CPU's, but their integrated graphics blow the snot out of the Intel ones, and are getting even better.
What good is a super fast CPU, if you can't play any games, or even do basic stuff without using the power hungry CPU?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331570</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333012</id>
	<title>there's a difference between "all dead" and</title>
	<author>fightinfilipino</author>
	<datestamp>1259953320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>"mostly dead".

maybe Miracle Max has a cure!</htmltext>
<tokenext>" mostly dead " .
maybe Miracle Max has a cure !</tokentext>
<sentencetext>"mostly dead".
maybe Miracle Max has a cure!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332280</id>
	<title>Bad for Linux</title>
	<author>Tailhook</author>
	<datestamp>1259943780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Intel has shown real commitment to supporting their video hardware on Linux with <a href="http://intellinuxgraphics.org/team.html" title="intellinuxgraphics.org">full time staff</a> [intellinuxgraphics.org] employed to produce high quality open source drivers in addition to providing open specifications for (most) of their contemporary hardware.  Unfortunately this hardware provides only limited 3D acceleration.  I was hoping that Larrabee would conflate these two and provide vendor supported, open, high performance accelerated 3D for Linux.</p><p>So much for that happening anytime soon...</p><p>I can't understand why Intel cedes the GPU market to its competitors.  Have I been getting duped into paying hundreds while everyone else gets free GPUs?  People are paying good money for these chips, right?  NVidia's got Playstation 3 and Apple.  ATI got the 360.  Intel has nothing in the discrete GPU market at all.  Why?  What blocker within Intel prevents them from taking a piece of that pie?</p></htmltext>
<tokenext>Intel has shown real commitment to supporting their video hardware on Linux with full time staff [ intellinuxgraphics.org ] employed to produce high quality open source drivers in addition to providing open specifications for ( most ) of their contemporary hardware .
Unfortunately this hardware provides only limited 3D acceleration .
I was hoping that Larrabee would conflate these two and provide vendor supported , open , high performance accelerated 3D for Linux .
So much for that happening anytime soon ...
I ca n't understand why Intel cedes the GPU market to its competitors .
Have I been getting duped into paying hundreds while everyone else gets free GPUs ?
People are paying good money for these chips , right ?
NVidia 's got Playstation 3 and Apple .
ATI got the 360 .
Intel has nothing in the discrete GPU market at all .
Why ? What blocker within Intel prevents them from taking a piece of that pie ?</tokentext>
<sentencetext>Intel has shown real commitment to supporting their video hardware on Linux with full time staff [intellinuxgraphics.org] employed to produce high quality open source drivers in addition to providing open specifications for (most) of their contemporary hardware.
Unfortunately this hardware provides only limited 3D acceleration.
I was hoping that Larrabee would conflate these two and provide vendor supported, open, high performance accelerated 3D for Linux.
So much for that happening anytime soon...
I can't understand why Intel cedes the GPU market to its competitors.
Have I been getting duped into paying hundreds while everyone else gets free GPUs?
People are paying good money for these chips, right?
NVidia's got Playstation 3 and Apple.
ATI got the 360.
Intel has nothing in the discrete GPU market at all.
Why?  What blocker within Intel prevents them from taking a piece of that pie?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30334712</id>
	<title>Smart Move</title>
	<author>Anonymous</author>
	<datestamp>1260027000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>If Intel stays out of the high-end graphics segment for a year or two longer and AMD/ATI keeps hammering nVidia, Intel will be able to acquire nVidia while claiming to preserve or increase competition in the segment.  Apply Intel manufacturing to nVidia designs for a winning product.</p></htmltext>
<tokenext>If Intel stays out of the high-end graphic segment for a year or two longer and AMD/ATI keeps hammering nVidia , Intel will be able to acquire nVidia while claiming to preserve or increase competition in the segment .
Apply Intel manufacturing to nVidia designs for a winning product .</tokentext>
<sentencetext>If Intel stays out of the high-end graphic segment for a year or two longer and AMD/ATI keeps hammering nVidia, Intel will be able to acquire nVidia while claiming to preserve or increase competition in the segment.
Apply Intel manufacturing to nVidia designs for a winning product.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332206</id>
	<title>Re:Heterogeneous Processors Are Doomed</title>
	<author>Anonymous</author>
	<datestamp>1259942940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Thanks. Ever since I quit reading sci.math I have an occasional yearning to read the ill-informed ramblings of a crank.</p></htmltext>
<tokenext>Thanks .
Ever since I quit reading sci.math I have an occasional yearning to read the ill-informed ramblings of a crank .</tokentext>
<sentencetext>Thanks.
Ever since I quit reading sci.math I have an occasional yearning to read the ill-informed ramblings of a crank.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331868</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333568</id>
	<title>Re:So the next mini, low end imac and 13" macbook'</title>
	<author>willy_me</author>
	<datestamp>1260007140000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p><div class="quote"><p>but at least they are dedicated graphics solutions</p></div><p>
Actually, the 9400m is not.  It uses system memory but does a much better job than Intel.  It also acts as the memory controller and does system IO.  The reason for the parent's comments is that all future Intel CPUs will have integrated memory controllers (like the i7 and i5) and an integrated GPU.  Performance will suck but it will make for cheap systems.  This will make it difficult for system builders to make a low end system with good graphic performance as the market for such systems will be small.  The smaller market will reduce the quality/performance of available parts for those system builders - one of which is Apple.
</p></div>
	</htmltext>
<tokenext>but at least they are dedicated graphics solutions Actually , the 9400m is not .
It uses system memory but does a much better job than Intel .
It also acts as the memory controller and does system IO .
The reason for the parent 's comments is that all future Intel CPUs will have integrated memory controllers ( like the i7 and i5 ) and an integrated GPU .
Performance will suck but it will make for cheap systems .
This will make it difficult for system builders to make a low end system with good graphic performance as the market for such systems will be small .
The smaller market will reduce the quality/performance of available parts for those system builders - one of which is Apple .</tokentext>
<sentencetext>but at least they are dedicated graphics solutions
Actually, the 9400m is not.
It uses system memory but does a much better job than Intel.
It also acts as the memory controller and does system IO.
The reason for the parent's comments is that all future Intel CPUs will have integrated memory controllers (like the i7 and i5) and an integrated GPU.
Performance will suck but it will make for cheap systems.
This will make it difficult for system builders to make a low end system with good graphic performance as the market for such systems will be small.
The smaller market will reduce the quality/performance of available parts for those system builders - one of which is Apple.

	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331986</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331890</id>
	<title>Wow... shock horror</title>
	<author>Plasmoid2000ad</author>
	<datestamp>1259938380000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext>I spent most of my internship at Intel arguing with people hyping Larrabee as the 2nd coming of Jesus that it would never happen...

And now I can finally say HAH!</htmltext>
<tokenext>I spent most of my internship at Intel arguing with people hyping Larrabee as the 2nd coming of Jesus that it would never happen ... And now I can finally say HAH !</tokenext>
<sentencetext>I spent most of my internship at Intel arguing with people hyping Larrabee as the 2nd coming of Jesus that it would never happen...

And now I can finally say HAH!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333666</id>
	<title>Re:Heterogeneous Processors Are Doomed</title>
	<author>Anonymous</author>
	<datestamp>1260008640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>So, just how deterministic is that parallel Seraphim computer there?</p></htmltext>
<tokenext>So , just how deterministic is that parallel Seraphim computer there ?</tokentext>
<sentencetext>So, just how deterministic is that parallel Seraphim computer there?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332180</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332372</id>
	<title>Re:Wow... shock horror</title>
	<author>Anonymous</author>
	<datestamp>1259944980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Oh, you are one of those smart interns, who come in and then tell architects who have been running simulations for years and collecting results, that hey, this is not going to work, without any data of your own, just from your intuition?<br>Are you, instead of Larrabee, the second coming of Jesus?</p></htmltext>
<tokenext>Oh , you are one of those smart interns , who come in and then tell architects who have been running simulations for years and collecting results , that hey , this is not going to work , without any data of your own , just from your intuition ? Are you , instead of Larrabee , the second coming of Jesus ?</tokentext>
<sentencetext>Oh, you are one of those smart interns, who come in and then tell architects who have been running simulations for years and collecting results, that hey, this is not going to work, without any data of your own, just from your intuition?
Are you, instead of Larrabee, the second coming of Jesus?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331890</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332668</id>
	<title>nvidia + intel??</title>
	<author>Anonymous</author>
	<datestamp>1259948460000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Is this a precursor to some nvidia/intel alliance?<br>It's a shame this isn't going to happen.  If anything this would have kicked wide open the video market with a known GPU instruction set. We may well be doomed to proprietary driver hell with system stability becoming more and more reliant on the proficiency of nvidia/amd.  For Linux users they are the weak point in system stability.</p></htmltext>
<tokenext>Is this a precursor to some nvidia/intel alliance ? It 's a shame this is n't going to happen .
If anything this would have kicked wide open the video market with a known GPU instruction set .
We may well be doomed to proprietary driver hell with system stability becoming more and more reliant on the proficiency of nvidia/amd .
For linux users they are the weak point in system stability .</tokentext>
<sentencetext>Is this a precursor to some nvidia/intel alliance?
It's a shame this isn't going to happen.
If anything this would have kicked wide open the video market with a known GPU instruction set.
We may well be doomed to proprietary driver hell with system stability becoming more and more reliant on the proficiency of nvidia/amd.
For linux users they are the weak point in system stability.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332082</id>
	<title>oh lord why!</title>
	<author>Anonymous</author>
	<datestamp>1259940900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>This is wholly depressing.<br>The open source graphics stack is disheartening to say the least and the kms/gallium architecture is probably 1.5 years<br>from delivering on its promise to optimize open graphics.<br>I was hoping that Larrabee would at least motivate ATI to put some real man-power behind<br>their half-hearted support for the xf86-video-ati driver.</p><p>This is almost sad enough to make me run to nvidia with my wallet wide open!</p></htmltext>
<tokenext>This is wholly depressing .
Open source graphics stack is disheartening to say the least and the kms/gallium architecture is probably 1.5 years from delivering on its promise to optimize open graphics .
I was hoping that Larrabee would at least motivate ATI to put some real man-power behind their half-hearted support for the xf86-video-ati driver .
This is almost sad enough to make me run to nvidia with my wallet wide open !</tokentext>
<sentencetext>This is wholy depressing.Open source graphics stack is disheartening to say the least and the kms/gallium architecture is probably 1.5 yearsfrom delivering on it's promise to optimize open graphics.I was hoping that larrabee would at least motivate ATI to put some real man-power behindtheir half-hearted support for the xf86-video-ati driver.This is almost sad enough to make me run to nvidia with my wallet wide open!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332358</id>
	<title>I wonder if Bangalore has anything to do with it.</title>
	<author>Anonymous</author>
	<datestamp>1259944740000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext><p>I think the announcement of the 48-core <a href="http://slashdot.org/story/09/12/02/215207/Intel-Shows-48-Core-x86-Processor" title="slashdot.org">Intel 'Bangalore' chip</a> [slashdot.org] just recently is not a coincidence.</p><p>When I first read about the Larrabee chip, I thought the decision to make it a cache coherent SMP chip to be simply insane - architectures like that are <i>very</i> difficult to scale, as the inter-core chatter scales roughly as the factorial of the number of cores. Remember how Larrabee was designed around a really wide 1024-bit ring bus? I bet that's required because otherwise the cores would spend all of their time trying to synchronize between each other.</p><p>So, Larrabee is effectively cancelled, but only a day or two before Intel announced an almost identical sounding part without cache-coherence! It sounds to me like they've given up on the 100% x86 compatibility, and realised that a chip with some extra instructions around explicit software controlled memory synchronization and message passing would scale <i>way</i> better. Without cache coherence, a "many core" chip is basically just an independent unit repeated over and over, so scalability should be almost infinite, and wouldn't require design changes for different sizes. That sounds like a much better match for a graphics processor.</p><p>While Intel kept their cards relatively close to their chest, from all of the presentations I've seen, no first-gen Larrabee chip could scale beyond 24 cores even with a 1024 bit bus, while the new Bangalore chip starts at 48 cores. There's no public info on how many lanes Bangalore has in its on-chip bus but based on the bandwidth of its 80 core experimental predecessor, I'm guessing it's either 32-bit or 64-bit (per core).</p></htmltext>
<tokenext>I think the announcement of the 48-core Intel 'Bangalore ' chip [ slashdot.org ] just recently is not a coincidence. When I first read about the Larrabee chip , I thought the decision to make it a cache coherent SMP chip to be simply insane - architectures like that are very difficult to scale , as the inter-core chatter scales roughly as the factorial of the number of cores .
Remember how Larrabee was designed around a really wide 1024-bit ring bus ?
I bet that 's required because otherwise the cores would spend all of their time trying to synchronize between each other. So , Larrabee is effectively cancelled , but only a day or two before Intel announced an almost identical sounding part without cache-coherence !
It sounds to me like they 've given up on the 100 % x86 compatibility , and realised that a chip with some extra instructions around explicit software controlled memory synchronization and message passing would scale way better .
Without cache coherence , a " many core " chip is basically just an independent unit repeated over and over , so scalability should be almost infinite , and would n't require design changes for different sizes .
That sounds like a much better match for a graphics processor. While Intel kept their cards relatively close to their chest , from all of the presentations I 've seen , no first-gen Larrabee chip could scale beyond 24 cores even with a 1024 bit bus , while the new Bangalore chip starts at 48 cores .
There 's no public info on how many lanes Bangalore has in its on-chip bus but based on the bandwidth of its 80 core experimental predecessor , I 'm guessing it 's either 32-bit or 64-bit ( per core ) .</tokentext>
<sentencetext>I think the announcement of the 48-core Intel 'Bangalore' chip [slashdot.org] just recently is not a coincidence. When I first read about the Larrabee chip, I thought the decision to make it a cache coherent SMP chip to be simply insane - architectures like that are very difficult to scale, as the inter-core chatter scales roughly as the factorial of the number of cores.
Remember how Larrabee was designed around a really wide 1024-bit ring bus?
I bet that's required because otherwise the cores would spend all of their time trying to synchronize between each other. So, Larrabee is effectively cancelled, but only a day or two before Intel announced an almost identical sounding part without cache-coherence!
It sounds to me like they've given up on the 100% x86 compatibility, and realised that a chip with some extra instructions around explicit software controlled memory synchronization and message passing would scale way better.
Without cache coherence, a "many core" chip is basically just an independent unit repeated over and over, so scalability should be almost infinite, and wouldn't require design changes for different sizes.
That sounds like a much better match for a graphics processor. While Intel kept their cards relatively close to their chest, from all of the presentations I've seen, no first-gen Larrabee chip could scale beyond 24 cores even with a 1024 bit bus, while the new Bangalore chip starts at 48 cores.
There's no public info on how many lanes Bangalore has in its on-chip bus but based on the bandwidth of its 80 core experimental predecessor, I'm guessing it's either 32-bit or 64-bit (per core).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332738</id>
	<title>AMD Patents</title>
	<author>Anonymous</author>
	<datestamp>1259949480000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Now that Intel has full use of AMD's ATI graphics patents I'm not surprised they have dumped Larrabee.... I would expect to see a new GPU product announcement from them next year that is similar to AMD's offerings....</p></htmltext>
<tokenext>Now that Intel has full use of AMD 's ATI graphics patents I 'm not surprised they have dumped Larrabee.... I would expect to see a new GPU product announcement from them next year that is similar to AMD 's offerings... .</tokentext>
<sentencetext>Now that Intel has full use of AMD's ATI graphics patents I'm not surprised they have dumped Larrabee.... I would expect to see a new GPU product announcement from them next year that is similar to AMD's offerings....</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331658</id>
	<title>Does Sarah O'Connor have anything to do with it?</title>
	<author>Anonymous</author>
	<datestamp>1259936520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Too bad, Larrabee looked like the next thing.</p></htmltext>
<tokenext>Too bad , Larrabee looked like the next thing .</tokentext>
<sentencetext>Too bad, Larrabee looked like the next thing.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30334082</id>
	<title>Anonymous Coward</title>
	<author>Anonymous</author>
	<datestamp>1260015900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Hmmm. <a href="http://arstechnica.com/hardware/news/2009/11/end-of-the-line-for-ibms-cell.ars" title="arstechnica.com" rel="nofollow">IBM's cell processor was killed two weeks ago. </a> [arstechnica.com]<nobr> <wbr></nobr>...  coincidence? Intel dragged along enough HPC customers to get the Cell processor out of the market. Mission Accomplished. Itanium made promises it didn't keep for 5+ years, but the promises of Intel alone were enough to kill Sparc and Alpha. Intel's MO is to promise just enough to kill the competition without having to deliver a viable product until well into the future.</p></htmltext>
<tokenext>Hmmm .
IBM 's cell processor was killed two weeks ago .
[ arstechnica.com ] ... coincidence ? Intel dragged along enough HPC customers to get the Cell processor out of the market .
Mission Accomplished .
Itanium made promises it did n't keep for 5 + years , but the promises of Intel alone were enough to kill Sparc and Alpha .
Intel 's MO is to promise just enough to kill the competition without having to deliver a viable product until well into the future .</tokentext>
<sentencetext>Hmmm.
IBM's cell processor was killed two weeks ago.
[arstechnica.com] ...  coincidence? Intel dragged along enough HPC customers to get the Cell processor out of the market.
Mission Accomplished.
Itanium made promises it didn't keep for 5+ years, but the promises of Intel alone were enough to kill Sparc and Alpha.
Intel's MO is to promise just enough to kill the competition without having to deliver a viable product until well into the future.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332578</id>
	<title>Re:Heterogeneous Processors Are Doomed</title>
	<author>Anonymous</author>
	<datestamp>1259947680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Except that Larrabee was supposed to be a grid of x86-64 CPUs. Pretty homogeneous compared with the host CPU. All that was needed was OS support</p></htmltext>
<tokenext>Except that Larrabee was supposed to be a grid of x86-64 CPUs .
Pretty homogeneous compared with the host CPU .
All that was needed was OS support</tokentext>
<sentencetext>Except that Larrabee was supposed to be a grid of x86-64 CPUs.
Pretty homogeneous compared with the host CPU.
All that was needed was OS support</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331868</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331596</id>
	<title>FIRST POST!!!</title>
	<author>Anonymous</author>
	<datestamp>1259936100000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>So long, suckers!</p></htmltext>
<tokenext>So long , suckers !</tokentext>
<sentencetext>So long, suckers!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332030</id>
	<title>Re:So the next mini, low end imac and 13" macbook'</title>
	<author>Anonymous</author>
	<datestamp>1259940120000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Marketroids, not engineers, buy parts for systems.  Apple has a contract with Intel and they will continue to buy from Intel until the profit margins shrink.</p></htmltext>
<tokenext>Marketroids , not engineers , buy parts for systems .
Apple has a contract with Intel and they will continue to buy from Intel until the profit margins shrink .</tokentext>
<sentencetext>Marketroids, not engineers, buy parts for systems.
Apple has a contract with Intel and they will continue to buy from Intel until the profit margins shrink.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331784</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331772</id>
	<title>Re:Great, just in time for Duke Nukem Forever!</title>
	<author>Anonymous</author>
	<datestamp>1259937420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>It's old, dude. Give it a rest.</htmltext>
<tokenext>It 's old , dude .
Give it a rest .</tokentext>
<sentencetext>It's old, dude.
Give it a rest.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331662</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332404</id>
	<title>If you are told that you ....</title>
	<author>Anonymous</author>
	<datestamp>1259945280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>will be working on a graphics chip project at Intel then:</p><p>- You know someone in management hates you<br>- You need to move your cube to a different floor<br>- You don't go to any meetings and if you do look like shit and fall asleep often<br>- Your career will be forever tarnished<br>- You will never get those 18 months back</p><p>and last but not least -- You know you shouldn't have put that whoopee cushion on Paul Otellini's chair</p></htmltext>
<tokenext>will be working on a graphics chip project at Intel then : - You know someone in management hates you - You need to move your cube to a different floor - You do n't go to any meetings and if you do look like shit and fall asleep often - Your career will be forever tarnished - You will never get those 18 months back and last but not least -- You know you should n't have put that whoopee cushion on Paul Otellini 's chair</tokentext>
<sentencetext>will be working on a graphics chip project at Intel then: - You know someone in management hates you - You need to move your cube to a different floor - You don't go to any meetings and if you do look like shit and fall asleep often - Your career will be forever tarnished - You will never get those 18 months back and last but not least -- You know you shouldn't have put that whoopee cushion on Paul Otellini's chair</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331586</id>
	<title>In other words...</title>
	<author>sznupi</author>
	<datestamp>1259936040000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>A nicer way of saying:</p><p><div class="quote"><p>Uhm, guys, remember how we were supposed to ship a year ago and said recently we will ship a year from now? Well, add 5 to that now...but we will provide and totally kick ass, promise.</p></div></p>
	</htmltext>
<tokenext>A nicer way of saying : Uhm , guys , remember how we were supposed to ship a year ago and said recently we will ship a year from now ?
Well , add 5 to that now...but we will provide and totally kick ass , promise .</tokentext>
<sentencetext>A nicer way of saying: Uhm, guys, remember how we were supposed to ship a year ago and said recently we will ship a year from now?
Well, add 5 to that now...but we will provide and totally kick ass, promise.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333252</id>
	<title>mod parent up</title>
	<author>Billly Gates</author>
	<datestamp>1260043920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>My wife and I play WoW but most users prefer to use a Wii or PS3 if they want to play games.</p><p>It's frustrating and I agree that the Intel chipsets and integrated chips (not true video cards) put desktops 5 - 6 years behind and piss off game developers forcing them to port only to consoles.</p><p>The netbook phenomenon shows this trend for slim boring graphics that are cheap cheap and uh cheap.</p><p>Most game developers have left the PC as a result due to angry kids whose parents get a nice i945 graphics chipset computer for them and they wonder why Crysis is a slide show.</p></htmltext>
<tokenext>My wife and I play WoW but most users prefer to use a Wii or PS3 if they want to play games. It 's frustrating and I agree that the Intel chipsets and integrated chips ( not true video cards ) put desktops 5 - 6 years behind and piss off game developers forcing them to port only to consoles. The netbook phenomenon shows this trend for slim boring graphics that are cheap cheap and uh cheap. Most game developers have left the PC as a result due to angry kids whose parents get a nice i945 graphics chipset computer for them and they wonder why Crysis is a slide show .</tokentext>
<sentencetext>My wife and I play WoW but most users prefer to use a Wii or PS3 if they want to play games. It's frustrating and I agree that the Intel chipsets and integrated chips (not true video cards) put desktops 5 - 6 years behind and piss off game developers forcing them to port only to consoles. The netbook phenomenon shows this trend for slim boring graphics that are cheap cheap and uh cheap. Most game developers have left the PC as a result due to angry kids whose parents get a nice i945 graphics chipset computer for them and they wonder why Crysis is a slide show.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332734</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333428</id>
	<title>Larrabee?</title>
	<author>dangitman</author>
	<datestamp>1260004020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Sounds like a character played by Rodney Dangerfield in a teen grope movie.</htmltext>
<tokenext>Sounds like a character played by Rodney Dangerfield in a teen grope movie .</tokentext>
<sentencetext>Sounds like a character played by Rodney Dangerfield in a teen grope movie.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332734</id>
	<title>I disagree</title>
	<author>Sycraft-fu</author>
	<datestamp>1259949480000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>1</modscore>
	<htmltext><p>Many people really don't care about their graphics card. If you don't do games, an Intel chipset graphics unit works fine. It accelerates the shiny interface in Windows 7 and everything is nice and responsive. For business uses, this is plenty.</p><p>Ok well if you do care about games, then you want a discrete graphics solution. Integrated solutions will just never do well. Big reason is memory. You can make your card as fast as you like; if it shares system memory it is severely bottlenecked. Graphics cards need their own dedicated high speed memory to perform well.</p><p>As such I just don't see ATi having a slightly better integrated solution as something people will care much about. The bigger question is who makes the better CPUs, and that is firmly in Intel's arena. Their CPUs are faster, and can be lower power. So regardless of if you want a power saving app or a performance solution, they've got a good chip.</p><p>AMD really has to get their chips up to snuff before they'll start competing with Intel more. They don't have to beat Intel at everything, but they need to have at least one area they are better for and they really don't seem to. Also they need to do better with chipsets and motherboards. A big advantage Intel has with regards to the reseller market is that they do their own solutions. Intel will sell you a CPU, chipset and motherboard and they all work together well. OEMs like this; it cuts down on supply chain problems and problems of vendors blaming each other when there's trouble.</p><p>This has also historically been a weak point for AMD. I remember when their Athlons came out and there was no question, they beat the P3's price/performance ratio. They were the kings of the hill. I bought one... and returned it two weeks later. The reason? Chipsets. I could not get a chipset that would work with my GeForce 256 properly. They had poor regulation of the AGP signal and it just wouldn't work. 
Bought an Intel chip/board and it worked flawlessly the first time.</p><p>So when AMD has a good CPU/chipset/mobo combo and CPUs competitive with Intel in at least one arena, I think maybe they'll make gains. Until then, I think they'll mainly be relegated to "cheap brands" and to enthusiast BYO systems.</p></htmltext>
<tokenext>Many people really do n't care about their graphics card .
If you do n't do games , an Intel chipset graphics unit works fine .
It accelerates the shiny interface in Windows 7 and everything is nice and responsive .
For business uses , this is plenty. Ok well if you do care about games , then you want a discrete graphics solution .
Integrated solutions will just never do well .
Big reason is memory .
You can make your card as fast as you like ; if it shares system memory it is severely bottlenecked .
Graphics cards need their own dedicated high speed memory to perform well. As such I just do n't see ATi having a slightly better integrated solution as something people will care much about .
The bigger question is who makes the better CPUs , and that is firmly in Intel 's arena .
Their CPUs are faster , and can be lower power .
So regardless of if you want a power saving app or a performance solution , they 've got a good chip. AMD really has to get their chips up to snuff before they 'll start competing with Intel more .
They do n't have to beat Intel at everything , but they need to have at least one area they are better for and they really do n't seem to .
Also they need to do better with chipsets and motherboards .
A big advantage Intel has with regards to the reseller market is that they do their own solutions .
Intel will sell you a CPU , chipset and motherboard and they all work together well .
OEMs like this ; it cuts down on supply chain problems and problems of vendors blaming each other when there 's trouble. This has also historically been a weak point for AMD .
I remember when their Athlons came out and there was no question , they beat the P3 's price/performance ratio .
They were the kings of the hill .
I bought one... and returned it two weeks later .
The reason ?
Chipsets. I could not get a chipset that would work with my GeForce 256 properly .
They had poor regulation of the AGP signal and it just would n't work .
Bought an Intel chip/board and it worked flawlessly the first time. So when AMD has a good CPU/chipset/mobo combo and CPUs competitive with Intel in at least one arena , I think maybe they 'll make gains .
Until then , I think they 'll mainly be relegated to " cheap brands " and to enthusiast BYO systems .</tokentext>
<sentencetext>Many people really don't care about their graphics card.
If you don't do games, an Intel chipset graphics unit works fine.
It accelerates the shiny interface in Windows 7 and everything is nice and responsive.
For business uses, this is plenty. Ok well if you do care about games, then you want a discrete graphics solution.
Integrated solutions will just never do well.
Big reason is memory.
You can make your card as fast as you like; if it shares system memory it is severely bottlenecked.
Graphics cards need their own dedicated high speed memory to perform well. As such I just don't see ATi having a slightly better integrated solution as something people will care much about.
The bigger question is who makes the better CPUs, and that is firmly in Intel's arena.
Their CPUs are faster, and can be lower power.
So regardless of if you want a power saving app or a performance solution, they've got a good chip. AMD really has to get their chips up to snuff before they'll start competing with Intel more.
They don't have to beat Intel at everything, but they need to have at least one area they are better for and they really don't seem to.
Also they need to do better with chipsets and motherboards.
A big advantage Intel has with regards to the reseller market is that they do their own solutions.
Intel will sell you a CPU, chipset and motherboard and they all work together well.
OEMs like this; it cuts down on supply chain problems and problems of vendors blaming each other when there's trouble. This has also historically been a weak point for AMD.
I remember when their Athlons came out and there was no question, they beat the P3's price/performance ratio.
They were the kings of the hill.
I bought one... and returned it two weeks later.
The reason?
Chipsets. I could not get a chipset that would work with my GeForce 256 properly.
They had poor regulation of the AGP signal and it just wouldn't work.
Bought an Intel chip/board and it worked flawlessly the first time. So when AMD has a good CPU/chipset/mobo combo and CPUs competitive with Intel in at least one arena, I think maybe they'll make gains.
Until then, I think they'll mainly be relegated to "cheap brands" and to enthusiast BYO systems.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331672</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332296</id>
	<title>Re:So the next mini, low end imac and 13" macbook'</title>
	<author>Anonymous</author>
	<datestamp>1259943960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>A better question is when AMD will come out with a competitive mobile platform, because Apple sure as hell would never use their current stuff.</p></htmltext>
<tokenext>A better question is when AMD will come out with a competitive mobile platform , because Apple sure as hell would never use their current stuff .</tokentext>
<sentencetext>A better question is when AMD will come out with a competitive mobile platform, because Apple sure as hell would never use their current stuff.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331784</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30334270</id>
	<title>agents  of change</title>
	<author>Anonymous</author>
	<datestamp>1260019260000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Chief: Max! They killed Larabee!</p><p>Agent 86: Sorry about that, Chief</p></htmltext>
<tokenext>Chief : Max !
They killed Larabee ! Agent 86 : Sorry about that , Chief</tokentext>
<sentencetext>Chief: Max!
They killed Larabee! Agent 86: Sorry about that, Chief</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331986</id>
	<title>Re:So the next mini, low end imac and 13" macbook'</title>
	<author>jasonwc</author>
	<datestamp>1259939580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I'm not sure what you're referring to. Macbook and Macbook Pros are configured with Nvidia 9400M or 9600M chipsets. They may not be powerful but at least they are dedicated graphics solutions. Far superior to Intel Integrated graphics, and they provide working hardware acceleration for H.264/VC-1. The Intel G45 chipset does so - but only with MPC-HC - not for commercial blu-ray playback - and it had some corruption last I checked.</htmltext>
<tokenext>I 'm not sure what you 're referring to .
Macbook and Macbook Pros are configured with Nvidia 9400M or 9600M chipsets .
They may not be powerful but at least they are dedicated graphics solutions .
Far superior to Intel Integrated graphics , and they provide working hardware acceleration for H.264/VC-1 .
The Intel G45 chipset does so - but only with MPC-HC - not for commercial blu-ray playback - and it had some corruption last I checked .</tokentext>
<sentencetext>I'm not sure what you're referring to.
Macbook and Macbook Pros are configured with Nvidia 9400M or 9600M chipsets.
They may not be powerful but at least they are dedicated graphics solutions.
Far superior to Intel Integrated graphics, and they provide working hardware acceleration for H.264/VC-1.
The Intel G45 chipset does so - but only with MPC-HC - not for commercial blu-ray playback - and it had some corruption last I checked.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331784</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333238</id>
	<title>Re:Wow... shock horror</title>
	<author>Anonymous</author>
	<datestamp>1260043740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Congrats on your fantastic insight even as an intern in going against decades of experience and turning out to be right. You must be really smart. Or at least as lucky as a coin tosser.</htmltext>
<tokenext>Congrats on your fantastic insight even as an intern in going against decades of experience and turning out to be right .
You must be really smart .
Or at least as lucky as a coin tosser .</tokentext>
<sentencetext>Congrats on your fantastic insight even as an intern in going against decades of experience and turning out to be right.
You must be really smart.
Or at least as lucky as a coin tosser.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331890</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331570</id>
	<title>Oh rats</title>
	<author>Anonymous</author>
	<datestamp>1259935980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>No consumer version means this will turn into another i860. I guess ATI will remain the only viable competitor to NVIDIA then.</htmltext>
<tokenext>No consumer version means this will turn into another i860 .
I guess ATI will remain the only viable competitor to NVIDIA then .</tokentext>
<sentencetext>No consumer version means this will turn into another i860.
I guess ATI will remain the only viable competitor to NVIDIA then.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331784</id>
	<title>So the next mini, low end imac and 13" macbook's w</title>
	<author>Anonymous</author>
	<datestamp>1259937480000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>1</modscore>
	<htmltext><p>So the next mini, low end iMac and 13" MacBooks will be stuck with shit video and the Mac Pro will start at $3000 with 6-core CPUs.</p><p>Will Apple move to AMD just to get better video in low end systems?</p></htmltext>
<tokenext>So the next mini , low end iMac and 13 " MacBooks will be stuck with shit video and the Mac Pro will start at $ 3000 with 6-core CPUs. Will Apple move to AMD just to get better video in low end systems ?</tokentext>
<sentencetext>So the next mini, low end iMac and 13" MacBooks will be stuck with shit video and the Mac Pro will start at $3000 with 6-core CPUs. Will Apple move to AMD just to get better video in low end systems?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331662</id>
	<title>Great, just in time for Duke Nukem Forever!</title>
	<author>WoTG</author>
	<datestamp>1259936580000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext>Hmm... I think Intel's plan is for Larrabee GPUs to launch at the same time as Duke Nukem Forever!<nobr> <wbr></nobr>:)</htmltext>
<tokenext>Hmm... I think Intel 's plan is for Larrabee GPUs to launch at the same time as Duke Nukem Forever !
: )</tokentext>
<sentencetext>Hmm... I think Intel's plan is for Larrabee GPUs to launch at the same time as Duke Nukem Forever!
:)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331740</id>
	<title>Re:Oh rats</title>
	<author>Anonymous</author>
	<datestamp>1259937120000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Barely.</p><p>ATi is only as viable as the monopolist Nvidia will let them be.</p><p>ATi has been very fortunate so far but they've just squeaked by in some years.</p><p>The problem is an uneducated populace... they keep rewarding anti-competitive and unethical behavior by continuing to purchase products from the monopolist.</p><p>Hopefully the thrice-convicted monopolist (intel) will beat the snot out of nvidia (recent lawsuit) and thereby reducing their effectiveness as a bully in the video card market.</p><p>Fighting fire with fire sometimes is the only way when no other solution is provided.</p></htmltext>
<tokentext>Barely .
ATi is only as viable as the monopolist Nvidia will let them be .
ATi has been very fortunate so far but they 've just squeaked by in some years .
The problem is an uneducated populace... they keep rewarding anti-competitive and unethical behavior by continuing to purchase products from the monopolist .
Hopefully the thrice-convicted monopolist ( intel ) will beat the snot out of nvidia ( recent lawsuit ) and thereby reducing their effectiveness as a bully in the video card market .
Fighting fire with fire sometimes is the only way when no other solution is provided .</tokentext>
<sentencetext>Barely.
ATi is only as viable as the monopolist Nvidia will let them be.
ATi has been very fortunate so far but they've just squeaked by in some years.
The problem is an uneducated populace... they keep rewarding anti-competitive and unethical behavior by continuing to purchase products from the monopolist.
Hopefully the thrice-convicted monopolist (intel) will beat the snot out of nvidia (recent lawsuit) and thereby reducing their effectiveness as a bully in the video card market.
Fighting fire with fire sometimes is the only way when no other solution is provided.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331570</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331970</id>
	<title>Intel Inside...</title>
	<author>Anonymous</author>
	<datestamp>1259939460000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Intel insiders have seen this coming.  <a href="http://www.intel.com/pressroom/kits/bios/perlmutt.htm" title="intel.com" rel="nofollow">Dadi</a> [intel.com] won. Three strikes and you're out for <a href="http://www.emc.com/about/emc-at-glance/exec-team/gelsinger.htm" title="emc.com" rel="nofollow">Pat</a> [emc.com].<br>1. Itanium<br>2. Pentium 4<br>3. Larrabee</p><p>Fortunately for the guys in Hillsboro, Nehalem is a glowing success.</p></htmltext>
<tokentext>Intel insiders have seen this coming .
Dadi [ intel.com ] won .
Three strikes and you 're out for Pat [ emc.com ] .
1. Itanium
2. Pentium 4
3. Larrabee
Fortunately for the guys in Hillsboro , Nehalem is a glowing success .</tokentext>
<sentencetext>Intel insiders have seen this coming.
Dadi [intel.com] won.
Three strikes and you're out for Pat [emc.com].
1. Itanium
2. Pentium 4
3. Larrabee
Fortunately for the guys in Hillsboro, Nehalem is a glowing success.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332600</id>
	<title>Re:Larrabee = Graphics Chip competing w nVidia</title>
	<author>moosesocks</author>
	<datestamp>1259947800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'm not one for conspiracy theories, although I wouldn't be terribly shocked if Intel surprised everybody and launched Larrabee a few months after AMD releases a competing product.</p><p>In the past, Intel's deliberately stifled product development and engaged in anticompetitive behaviors that would even make Microsoft look twice (and has been found guilty and forced to pay up to this extent).  Remember how quickly Intel brought consumer x86-64 chips to market after AMD proved that the platform was technically and commercially viable?</p><p>Of course, this may be giving Intel too much credit -- the success of the 'Core' series was essentially a whole lot of luck -- Itanium and Pentium 4 were always planned to be "the way forward" for the company.  When neither panned out, the company was able to fall back on its low-power mobile platform, which turned out to scale remarkably well, despite having its origins in a much older architecture.</p></htmltext>
<tokentext>I 'm not one for conspiracy theories , although I would n't be terribly shocked if Intel surprised everybody and launched Larrabee a few months after AMD releases a competing product .
In the past , Intel 's deliberately stifled product development and engaged in anticompetitive behaviors that would even make Microsoft look twice ( and has been found guilty and forced to pay up to this extent ) .
Remember how quickly Intel brought consumer x86-64 chips to market after AMD proved that the platform was technically and commercially viable ?
Of course , this may be giving Intel too much credit -- the success of the 'Core ' series was essentially a whole lot of luck -- Itanium and Pentium 4 were always planned to be " the way forward " for the company .
When neither panned out , the company was able to fall back on its low-power mobile platform , which turned out to scale remarkably well , despite having its origins in a much older architecture .</tokentext>
<sentencetext>I'm not one for conspiracy theories, although I wouldn't be terribly shocked if Intel surprised everybody and launched Larrabee a few months after AMD releases a competing product.
In the past, Intel's deliberately stifled product development and engaged in anticompetitive behaviors that would even make Microsoft look twice (and has been found guilty and forced to pay up to this extent).
Remember how quickly Intel brought consumer x86-64 chips to market after AMD proved that the platform was technically and commercially viable?
Of course, this may be giving Intel too much credit -- the success of the 'Core' series was essentially a whole lot of luck -- Itanium and Pentium 4 were always planned to be "the way forward" for the company.
When neither panned out, the company was able to fall back on its low-power mobile platform, which turned out to scale remarkably well, despite having its origins in a much older architecture.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331648</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333568
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331986
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331784
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331570
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332206
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331868
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332296
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331784
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333118
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332734
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331672
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331570
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331740
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331570
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332600
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331648
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332372
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331890
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333666
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332180
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331868
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333252
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332734
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331672
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331570
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333682
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331890
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333696
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331672
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331570
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331772
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331662
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333500
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332358
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331648
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30335588
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331784
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332442
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331672
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331570
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332030
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331784
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30334862
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331784
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333238
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331890
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332474
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331672
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331570
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30334160
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331672
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331570
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332578
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331868
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_05_005234_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333532
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332358
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_05_005234.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331890
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332372
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333682
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333238
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_05_005234.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332358
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333500
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333532
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_05_005234.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332280
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_05_005234.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332144
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_05_005234.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331570
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331672
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332734
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333118
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333252
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333696
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332474
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332442
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30334160
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331624
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331740
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_05_005234.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331868
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332206
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332578
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332180
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333666
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_05_005234.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331784
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331986
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333568
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30334862
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30335588
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332030
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332296
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_05_005234.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331580
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_05_005234.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332738
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_05_005234.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331596
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_05_005234.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331662
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331772
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_05_005234.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331658
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_05_005234.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331648
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30333560
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30332600
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_05_005234.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_05_005234.30331586
</commentlist>
</conversation>
