<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_03_08_2049235</id>
	<title>8-Core Intel Nehalem-EX To Launch This Month</title>
	<author>Soulskill</author>
	<datestamp>1268040540000</datestamp>
	<htmltext><a href="http://hothardware.com/" rel="nofollow">MojoKid</a> writes <i>"What could you do with 8 physical cores of CPU processing power?  <a href="http://hothardware.com/News/Buckle-Up-Intel-Preps-8Core-NehalemEX-Chips-for-March-Launch/">Intel's upcoming 8-core Nehalem-EX is launching later this month</a>, according to Intel Xeon Platform Director Shannon Poulin. The announcement puts to rest rumors that the 8-core part might be delayed, and makes good on a promise Intel made last year when the chip maker said it would release the chip in the first half of 2010.  To quickly recap, Nehalem-EX boasts an extensive feature-set, including up to 8 cores per processor, up to 16 threads per processor with Intel Hyper-threading, scalability up to eight sockets via Intel's serial Quick Path Interconnect and more with third-party node controllers, and 24MB of shared cache."</i></htmltext>
<tokentext>MojoKid writes " What could you do with 8 physical cores of CPU processing power ?
Intel 's upcoming 8-core Nehalem-EX is launching later this month , according to Intel Xeon Platform Director Shannon Poulin .
The announcement puts to rest rumors that the 8-core part might be delayed , and makes good on a promise Intel made last year when the chip maker said it would release the chip in the first half of 2010 .
To quickly recap , Nehalem-EX boasts an extensive feature-set , including up to 8 cores per processor , up to 16 threads per processor with Intel Hyper-threading , scalability up to eight sockets via Intel 's serial Quick Path Interconnect and more with third-party node controllers , and 24MB of shared cache .
"</tokentext>
<sentencetext>MojoKid writes "What could you do with 8 physical cores of CPU processing power?
Intel's upcoming 8-core Nehalem-EX is launching later this month, according to Intel Xeon Platform Director Shannon Poulin.
The announcement puts to rest rumors that the 8-core part might be delayed, and makes good on a promise Intel made last year when the chip maker said it would release the chip in the first half of 2010.
To quickly recap, Nehalem-EX boasts an extensive feature-set, including up to 8 cores per processor, up to 16 threads per processor with Intel Hyper-threading, scalability up to eight sockets via Intel's serial Quick Path Interconnect and more with third-party node controllers, and 24MB of shared cache.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31411502</id>
	<title>Fuck everything, we're doing eight cores</title>
	<author>Anonymous</author>
	<datestamp>1268134200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Would someone tell me how this happened? We were the fucking vanguard of chips in this country. The Intel Quad Core was the CPU to own. Then the other guy came out with a four core CPU. Were we scared? Hell, no. Because we hit back with a little thing called the Core2 Extreme. That's four cores and a hefty price tag. For the bragging rights. But you know what happened next? Shut up, I'm telling you what happened&mdash;the bastards went to four cores. Now we're standing around with our cocks in our hands, selling four cores and an "Extreme" moniker. Extreme or no, suddenly we're the chumps. Well, fuck it. We're going to eight cores.</p></htmltext>
<tokentext>Would someone tell me how this happened ?
We were the fucking vanguard of chips in this country .
The Intel Quad Core was the CPU to own .
Then the other guy came out with a four core CPU .
Were we scared ?
Hell , no .
Because we hit back with a little thing called the Core2 Extreme .
That 's four cores and a hefty price tag .
For the bragging rights .
But you know what happened next ?
Shut up , I 'm telling you what happened — the bastards went to four cores .
Now we 're standing around with our cocks in our hands , selling four cores and an " Extreme " moniker .
Extreme or no , suddenly we 're the chumps .
Well , fuck it .
We 're going to eight cores .</tokentext>
<sentencetext>Would someone tell me how this happened?
We were the fucking vanguard of chips in this country.
The Intel Quad Core was the CPU to own.
Then the other guy came out with a four core CPU.
Were we scared?
Hell, no.
Because we hit back with a little thing called the Core2 Extreme.
That's four cores and a hefty price tag.
For the bragging rights.
But you know what happened next?
Shut up, I'm telling you what happened—the bastards went to four cores.
Now we're standing around with our cocks in our hands, selling four cores and an "Extreme" moniker.
Extreme or no, suddenly we're the chumps.
Well, fuck it.
We're going to eight cores.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409558</id>
	<title>Re:Cost?</title>
	<author>BiggerIsBetter</author>
	<datestamp>1268065860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><div class="quote"><p>I don't see anyone talking about the cost of this, any ideas?</p></div><p>If you have to ask, you can't afford it.</p>
	</htmltext>
<tokentext>I do n't see anyone talking about the cost of this , any ideas ?
If you have to ask , you ca n't afford it .</tokentext>
<sentencetext>I don't see anyone talking about the cost of this, any ideas?
If you have to ask, you can't afford it.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408770</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406694</id>
	<title>Re:March of the penguins</title>
	<author>xmas2003</author>
	<datestamp>1268047740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If it's penguins you want on your screen:
<br/>
<a href="http://www.komar.org/faq/travel/vacation/antarctica/gentoo-penguins/" title="komar.org">Gentoo Penguins</a> [komar.org] - <a href="http://www.komar.org/faq/travel/vacation/antarctica/south-georgia_saint-andrews_king-penguins/" title="komar.org">King Penguins</a> [komar.org] - <a href="http://www.komar.org/faq/travel/vacation/antarctica/skua-attack-penguin-battle/" title="komar.org">Penguin being attacked by a Skua!</a> [komar.org]</htmltext>
<tokentext>If it 's penguins you want on your screen : Gentoo Penguins [ komar.org ] - King Penguins [ komar.org ] - Penguin being attacked by a Skua !
[ komar.org ]</tokentext>
<sentencetext>If it's penguins you want on your screen:

Gentoo Penguins [komar.org] - King Penguins [komar.org] - Penguin being attacked by a Skua!
[komar.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405630</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406738</id>
	<title>Re:Hyperthreading</title>
	<author>jcupitt65</author>
	<datestamp>1268047860000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
<htmltext><p>Hyperthreading used to suck, but it works pretty well now. In the benchmarks I've done with my code I see about a 60% speedup.

</p><p> <a href="http://www.vips.ecs.soton.ac.uk/index.php?title=Benchmarks#Results_summary" title="soton.ac.uk">http://www.vips.ecs.soton.ac.uk/index.php?title=Benchmarks#Results_summary</a> [soton.ac.uk]</p></htmltext>
<tokentext>Hyperthreading used to suck , but it works pretty well now .
In the benchmarks I 've done with my code I see about a 60 % speedup .
http : //www.vips.ecs.soton.ac.uk/index.php ? title = Benchmarks # Results_summary [ soton.ac.uk ]</tokentext>
<sentencetext>Hyperthreading used to suck, but it works pretty well now.
In the benchmarks I've done with my code I see about a 60% speedup.
http://www.vips.ecs.soton.ac.uk/index.php?title=Benchmarks#Results_summary [soton.ac.uk]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406162</id>
	<title>Licensed per Core</title>
	<author>Anonymous</author>
	<datestamp>1268046060000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>Software developers are going to have to figure out a new approach to licensing many of their products. VMware, for example, allows you to use a single license for every processor of 6 or fewer cores... how many people are going to pay for another license for the 2 extra cores? I see per core licenses coming in the near future.</htmltext>
<tokentext>Software developers are going to have to figure out a new approach to licensing many of their products .
VMware , for example , allows you to use a single license for every processor of 6 or fewer cores... how many people are going to pay for another license for the 2 extra cores ?
I see per core licenses coming in the near future .</tokentext>
<sentencetext>Software developers are going to have to figure out a new approach to licensing many of their products.
VMware, for example, allows you to use a single license for every processor of 6 or fewer cores... how many people are going to pay for another license for the 2 extra cores?
I see per core licenses coming in the near future.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405630</id>
	<title>March of the penguins</title>
	<author>suso</author>
	<datestamp>1268044140000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><p>Ah! My dream of the day when I can boot up and see penguins taking up the entire screen is almost here.</p></htmltext>
<tokentext>Ah !
My dream of the day when I can boot up and see penguins taking up the entire screen is almost here .</tokentext>
<sentencetext>Ah!
My dream of the day when I can boot up and see penguins taking up the entire screen is almost here.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406714</id>
	<title>Joy! I hope it comes with ...</title>
	<author>freaker_TuC</author>
	<datestamp>1268047800000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>... super cool looking white plastic mold which fits my socket and cool looking notepad!</p></htmltext>
<tokentext>... super cool looking white plastic mold which fits my socket and cool looking notepad !</tokentext>
<sentencetext>... super cool looking white plastic mold which fits my socket and cool looking notepad!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31412484</id>
	<title>Re:Hyperthreading</title>
	<author>Anonymous</author>
	<datestamp>1268144940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Wow, never knew this. Looks like I could gain some performance by disabling my hyper-threading!</htmltext>
<tokentext>Wow , never knew this .
Looks like I could gain some performance by disabling my hyper-threading !</tokentext>
<sentencetext>Wow, never knew this.
Looks like I could gain some performance by disabling my hyper-threading!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407940</id>
	<title>Re:March of the penguins</title>
	<author>MrCrassic</author>
	<datestamp>1268053620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>What are you talking about? I can finally run Vista with that! If I'm lucky, I might even get Aero!</p></htmltext>
<tokentext>What are you talking about ?
I can finally run Vista with that !
If I 'm lucky , I might even get Aero !</tokentext>
<sentencetext>What are you talking about?
I can finally run Vista with that!
If I'm lucky, I might even get Aero!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405630</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408492</id>
	<title>Re:Somebody's gotta ask...</title>
	<author>Dragoniz3r</author>
	<datestamp>1268057040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Didn't you hear? They're just "demo units"... ;) riiiiiiiight</htmltext>
<tokentext>Did n't you hear ?
They 're just " demo units " ... ; ) riiiiiiiight</tokentext>
<sentencetext>Didn't you hear?
They're just "demo units"... ;) riiiiiiiight</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407540</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407688</id>
	<title>Re:Finally!</title>
	<author>aldld</author>
	<datestamp>1268052180000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>0</modscore>
	<htmltext>But can it run Linux?</htmltext>
<tokentext>But can it run Linux ?</tokentext>
<sentencetext>But can it run Linux?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405824</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409408</id>
	<title>Re:IBM Power7 also has 8 cores</title>
	<author>Anpheus</author>
	<datestamp>1268064600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Your post is all sorts of confusing.</p><blockquote><div><p>It is actually based on 45nm technology compared to Intel's latest 32nm.</p></div></blockquote><p>That makes it sound like more nanometers is better. Not in this case.</p><blockquote><div><p>Both Nehalem-EX and Power7 are targeting low-end server market, so it should be interesting battle.</p></div></blockquote><p>You have an extremely interesting definition of low end.</p>
	</htmltext>
<tokentext>Your post is all sorts of confusing .
It is actually based on 45nm technology compared to Intel 's latest 32nm .
That makes it sound like more nanometers is better .
Not in this case .
Both Nehalem-EX and Power7 are targeting low-end server market , so it should be interesting battle .
You have an extremely interesting definition of low end .</tokentext>
<sentencetext>Your post is all sorts of confusing.
It is actually based on 45nm technology compared to Intel's latest 32nm.
That makes it sound like more nanometers is better.
Not in this case.
Both Nehalem-EX and Power7 are targeting low-end server market, so it should be interesting battle.
You have an extremely interesting definition of low end.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405910</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405814</id>
	<title>Re:It's obvious</title>
	<author>recoiledsnake</author>
	<datestamp>1268044980000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>This processor is meant for servers, because they're Xeon, and with all the Web 2.0 and Cloud computing going on, servers are always hungry for more power.</p></htmltext>
<tokentext>This processor is meant for servers , because they 're Xeon , and with all the Web 2.0 and Cloud computing going on , servers are always hungry for more power .</tokentext>
<sentencetext>This processor is meant for servers, because they're Xeon, and with all the Web 2.0 and Cloud computing going on, servers are always hungry for more power.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405680</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406508</id>
	<title>Re:programs compatible with 8 cores</title>
	<author>Wdomburg</author>
	<datestamp>1268047200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Ever heard of a little thing called virtualization? :)  Even in the single-host space, there is plenty of software with a high degree of parallelism and horizontal scalability.  We have several types of servers that run hundreds (and in some cases thousands) of threads or processes.</p></htmltext>
<tokentext>Ever heard of a little thing called virtualization ?
: ) Even in the single-host space , there is plenty of software with a high degree of parallelism and horizontal scalability .
We have several types of servers that run hundreds ( and in some cases thousands ) of threads or processes .</tokentext>
<sentencetext>Ever heard of a little thing called virtualization?
:)  Even in the single-host space, there is plenty of software with a high degree of parallelism and horizontal scalability.
We have several types of servers that run hundreds (and in some cases thousands) of threads or processes.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405896</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405896</id>
	<title>programs compatible with 8 cores</title>
	<author>Anonymous</author>
	<datestamp>1268045280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>But how long before game makers and other software companies write code that can take advantage of all those cores?   By the time they do, Intel or AMD will have mainstream 32 or 64 core processors on the market.</htmltext>
<tokentext>But how long before game makers and other software companies write code that can take advantage of all those cores ?
By the time they do , Intel or AMD will have mainstream 32 or 64 core processors on the market .</tokentext>
<sentencetext>But how long before game makers and other software companies write code that can take advantage of all those cores?
By the time they do, Intel or AMD will have mainstream 32 or 64 core processors on the market.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408358</id>
	<title>Re:When will Moore's Law apply to Cores?</title>
	<author>rm999</author>
	<datestamp>1268056200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>A core is theoretically a fixed number of transistors, and Moore's law (depending on which version you quote) essentially states that the number of transistors per chip will double every 18 months. Therefore, a corollary to Moore's Law is that the number of cores could double every 18 months. I say could because engineers may decide to put more transistors in each core, which would slow down the core increase.</p><p>Also, there is another caveat: many applications will never be able to take advantage of an insane number of cores, so most chips may not follow this route. I actually believe the market for chips will split into two, and that this split has already begun. There will be the chips that follow your core-doubling law. These will target gamers (replacing the GPU), scientists, and power-users (photographers, digital artists, etc.). The second type of chip will settle on a few cores and will be extremely small/cheap/efficient. These will be the Atom processors of the future, and could provide more than enough power for the typical user.</p><p>I believe the CPU demand of most users has mostly plateaued in the past few years (my netbook has the same power as my computer from almost ten years ago), and that is why I wouldn't call chip-doubling a law. The demand for it in the mainstream simply isn't there (at least, yet).</p>
	</htmltext>
<tokentext>A core is theoretically a fixed number of transistors , and Moore 's law ( depending on which version you quote ) essentially states that the number of transistors per chip will double every 18 months .
Therefore , a corollary to Moore 's Law is that the number of cores could double every 18 months .
I say could because engineers may decide to put more transistors in each core , which would slow down the core increase .
Also , there is another caveat : many applications will never be able to take advantage of an insane number of cores , so most chips may not follow this route .
I actually believe the market for chips will split into two , and that this split has already begun .
There will be the chips that follow your core-doubling law .
These will target gamers ( replacing the GPU ) , scientists , and power-users ( photographers , digital artists , etc ) .
The second type of chip will settle on a few cores and will be extremely small/cheap/efficient .
These will be the Atom processors of the future , and could provide more than enough power for the typical user .
I believe the CPU demand of most users has mostly plateaued in the past few years ( my netbook has the same power as my computer from almost ten years ago ) , and that is why I would n't call chip-doubling a law .
The demand for it in the mainstream simply is n't there ( at least , yet ) .</tokentext>
<sentencetext>A core is theoretically a fixed number of transistors, and Moore's law (depending on which version you quote) essentially states that the number of transistors per chip will double every 18 months.
Therefore, a corollary to Moore's Law is that the number of cores could double every 18 months.
I say could because engineers may decide to put more transistors in each core, which would slow down the core increase.
Also, there is another caveat: many applications will never be able to take advantage of an insane number of cores, so most chips may not follow this route.
I actually believe the market for chips will split into two, and that this split has already begun.
There will be the chips that follow your core-doubling law.
These will target gamers (replacing the GPU), scientists, and power-users (photographers, digital artists, etc).
The second type of chip will settle on a few cores and will be extremely small/cheap/efficient.
These will be the Atom processors of the future, and could provide more than enough power for the typical user.
I believe the CPU demand of most users has mostly plateaued in the past few years (my netbook has the same power as my computer from almost ten years ago), and that is why I wouldn't call chip-doubling a law.
The demand for it in the mainstream simply isn't there (at least, yet).
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406254</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407930</id>
	<title>Re:March of the penguins</title>
	<author>KillShill</author>
	<datestamp>1268053560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Penguins and monopolists go together like.... oil and water.</p></htmltext>
<tokentext>Penguins and monopolists go together like.... oil and water .</tokentext>
<sentencetext>Penguins and monopolists go together like.... oil and water.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405630</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405836</id>
	<title>David, take your Facebook group and fuck off.</title>
	<author>Anonymous</author>
	<datestamp>1268045040000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>-1</modscore>
	<htmltext><p>For fuck's sake, David. That's clearly your group, based on the admin's name.</p><p>Please don't spam us with your shitty attempt at humor and cleverness. You fail, David. You fucking fail.</p></htmltext>
<tokentext>For fuck 's sake , David .
That 's clearly your group , based on the admin 's name .
Please do n't spam us with your shitty attempt at humor and cleverness .
You fail , David .
You fucking fail .</tokentext>
<sentencetext>For fuck's sake, David.
That's clearly your group, based on the admin's name.
Please don't spam us with your shitty attempt at humor and cleverness.
You fail, David.
You fucking fail.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405680</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31411524</id>
	<title>Re:Ditch x86</title>
	<author>MessyBlob</author>
	<datestamp>1268134380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Although I've been marked as 'troll' (boo!), I'm pleased that some relevant points have been raised in reply. ;o) </p><p>I think we're still waiting for a unified breakthrough in core, memory, code, and data design; I can't say what those breakthroughs are, of course.</p><p>I'm not complaining about CISC vs RISC, but that our current memory architecture does not serve our instruction sets well, our layered and dynamically abstracted OO code is awkward to implement without queues of indirection, and our compilers are slaves to all the above. </p><p>&lt;/untroll&gt;</p></htmltext>
<tokentext>Although I 've been marked as 'troll ' ( boo ! ) , I 'm pleased that some relevant points have been raised in reply .
; o ) I think we 're still waiting for a unified breakthrough in core , memory , code , and data design ; I ca n't say what those breakthroughs are , of course .
I 'm not complaining about CISC vs RISC , but that our current memory architecture does not serve our instruction sets well , our layered and dynamically abstracted OO code is awkward to implement without queues of indirection , and our compilers are slaves to all the above .</tokentext>
<sentencetext>Although I've been marked as 'troll' (boo!), I'm pleased that some relevant points have been raised in reply.
;o) I think we're still waiting for a unified breakthrough in core, memory, code, and data design; I can't say what those breakthroughs are, of course.
I'm not complaining about CISC vs RISC, but that our current memory architecture does not serve our instruction sets well, our layered and dynamically abstracted OO code is awkward to implement without queues of indirection, and our compilers are slaves to all the above. </sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408376</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409940</id>
	<title>Re:IBM Power7 also has 8 cores</title>
	<author>bertok</author>
	<datestamp>1268070300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>If it is matter of core-war, IBM's latest Power7 also has 8 cores. It is actually based on 45nm technology compared to Intel's latest 32nm. What makes Power7 exciting is that it has on-die 32MB L3 cache. They achieved this by introducing eDRAM (embedded DRAM) in the technology. Both Nehalem-EX and Power7 are targeting low-end server market, so it should be interesting battle. </p><p><a href="http://arstechnica.com/hardware/news/2009/09/ibms-8-core-power7-twice-the-muscle-half-the-transistors.ars" title="arstechnica.com">http://arstechnica.com/hardware/news/2009/09/ibms-8-core-power7-twice-the-muscle-half-the-transistors.ars</a> [arstechnica.com] </p></div><p>Since when is the "low-end server market" made up of 8-core 8-socket machines? Are you from the... <i>future</i>?</p>
	</htmltext>
<tokentext>If it is matter of core-war , IBM 's latest Power7 also has 8 cores .
It is actually based on 45nm technology compared to Intel 's latest 32nm .
What makes Power7 exciting is that it has on-die 32MB L3 cache .
They achieved this by introducing eDRAM ( embedded DRAM ) in the technology .
Both Nehalem-EX and Power7 are targeting low-end server market , so it should be interesting battle .
http : //arstechnica.com/hardware/news/2009/09/ibms-8-core-power7-twice-the-muscle-half-the-transistors.ars [ arstechnica.com ]
Since when is the " low-end server market " made up of 8-core 8-socket machines ?
Are you from the... future ?</tokentext>
<sentencetext>If it is matter of core-war, IBM's latest Power7 also has 8 cores.
It is actually based on 45nm technology compared to Intel's latest 32nm.
What makes Power7 exciting is that it has on-die 32MB L3 cache.
They achieved this by introducing eDRAM (embedded DRAM) in the technology.
Both Nehalem-EX and Power7 are targeting low-end server market, so it should be interesting battle.
http://arstechnica.com/hardware/news/2009/09/ibms-8-core-power7-twice-the-muscle-half-the-transistors.ars [arstechnica.com]
Since when is the "low-end server market" made up of 8-core 8-socket machines?
Are you from the... future?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405910</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409510</id>
	<title>Re:Balance</title>
	<author>daveime</author>
	<datestamp>1268065380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I was with you right up until "boxen". Then you ceased to exist in my universe.</p></htmltext>
<tokentext>I was with you right up until " boxen " .
Then you ceased to exist in my universe .</tokentext>
<sentencetext>I was with you right up until "boxen".
Then you ceased to exist in my universe.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405868</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408184</id>
	<title>Re:programs compatible with 8 cores</title>
	<author>Klintus Fang</author>
	<datestamp>1268055000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>How long before game makers write code that supports this new chip?<br/>Answer: a very long time.  This is a Xeon part designed for large database servers.  It isn't intended for desktops.  Some fool might try to put it into a gaming rig eventually, but that person...really will be worthy of the title "fool".  That would be like putting an engine designed for a freight train into a Ferrari.</p><p>When will other software vendors have code that supports this many cores?<br/>Answer:  they already do.  The companies that write database management software for very large backend database servers already have code that scales to very large core counts.  As do many HPC software vendors.  That is the intended market segment for this chip, and that market segment has lots of software that is ready to burn through all those cores now.</p></htmltext>
<tokentext>How long before game makers write code that supports this new chip ? Answer : a very long time .
This is a xeon part designed for large database servers .
It is n't intended for desktops .
Some fool might try to put it into a gaming rig eventually , but that person...really will be worthy of the title " fool " .
That would be like putting an engine designed for a freight train into a Ferrari .
When will other software vendors have code that supports this many cores ? Answer : they already do .
the companies that write database management software for very large backend database servers already have code that scales to very large core counts .
As do many HPC software vendors .
That is the intended market segment for this chip and that market segment has lots of software that is ready to burn through all those cores now .</tokentext>
<sentencetext>How long before game makers write code that supports this new chip? Answer: a very long time.
This is a xeon part designed for large database servers.
It isn't intended for desktops.
Some fool might try to put it into a gaming rig eventually, but that person... really will be worthy of the title "fool".
That would be like putting an engine designed for a freight train into a Ferrari. When will other software vendors have code that supports this many cores? Answer: they already do.
The companies that write database management software for very large backend database servers already have code that scales to very large core counts.
As do many HPC software vendors.
That is the intended market segment for this chip and that market segment has lots of software that is ready to burn through all those cores now.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405896</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405916</id>
	<title>Re:Balance</title>
	<author>Anonymous</author>
	<datestamp>1268045340000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>Does it have the memory I/O bandwidth to keep up with the CPUs?</p></div><p>Yes. This is what QuickPath is all about. It is a genuine, modern, NUMA architecture.</p><div class="quote"><p>When will I be able to actually buy a motherboard with 8 of these 8 core CPUs,</p></div><p>Shortly after launch, if not at launch. Of course, that board will run $2k-$3k, and will only take ECC RAM, but you're not really asking this if that's a problem.</p><div class="quote"><p>and what kind of a frame rate would Crysis get on that rig?</p></div><p>At lowest settings, I'm guessing about 2FPS. That's because it wouldn't have a video card, because computer cluster machines generally don't.</p>
	</htmltext>
<tokenext>Does it have the memory I/O bandwidth to keep up with the CPUs ? Yes .
This is what quickpath is all about .
It is a genuine , modern , NUMA architecture .
When will I be able to actually buy a motherboard with 8 of these 8 core CPUs , Shortly after launch , if not at launch .
Of course , that board will run $ 2k- $ 3k , and will only take ECC RAM , but you 're not really asking this if that 's a problem .
and what kind of a frame rate would Crysis get on that rig ? At lowest settings , I 'm guessing about 2FPS .
That 's because it would n't have a video card , because computer cluster machines generally do n't .</tokentext>
<sentencetext>Does it have the memory I/O bandwidth to keep up with the CPUs? Yes.
This is what quickpath is all about.
It is a genuine, modern, NUMA architecture.
When will I be able to actually buy a motherboard with 8 of these 8 core CPUs, Shortly after launch, if not at launch.
Of course, that board will run $2k-$3k, and will only take ECC RAM, but you're not really asking this if that's a problem.
and what kind of a frame rate would Crysis get on that rig? At lowest settings, I'm guessing about 2FPS.
That's because it wouldn't have a video card, because computer cluster machines generally don't.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406288</id>
	<title>Re:IBM Power7 also has 8 cores</title>
	<author>ender-</author>
	<datestamp>1268046480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I've had 8-core servers for over a year now. Sun T5220s [1 x 8-core x 8-thread] and T5240s [2 x 8-core x 8-thread]. They may not have the raw number crunching ability of the Intel chips (and I know nothing about the IBM chips), but these things can multi-thread like nobody's business! ;) Love seeing the OS report 128 processors!</p><p>For the real CPU-hungry stuff - a.k.a. Windows running on ESX :) - we have some 16 core [4 x 4-core] X4450s. I wouldn't mind getting 4 x 8-core Nehalem chips in there.</p></htmltext>
<tokenext>I 've had 8-core servers for over a year now .
Sun T5220s [ 1 x 8-core x 8-thread ] and T5240s [ 2 x 8-core x 8-thread ] .
They may not have the raw number crunching ability of the Intel Chips ( and I know nothing about the IBM chips ) , but these things can multi-thread like nobody 's business !
; ) Love seeing the OS report 128 processors !
For the real CPU-hungry stuff - a.k.a. Windows running on ESX : ) - we have some 16 core [ 4 x 4-core ] X4450s .
I would n't mind getting 4 x 8-core Nehalem chips in there .</tokentext>
<sentencetext>I've had 8-core servers for over a year now.
Sun T5220s [1 x 8-core x 8-thread] and T5240s [2 x 8-core x 8-thread].
They may not have the raw number crunching ability of the Intel Chips ( and I know nothing about the IBM chips ), but these things can multi-thread like nobody's business!
;) Love seeing the OS report 128 processors!
For the real CPU-hungry stuff - a.k.a. Windows running on ESX :) - we have some 16 core [4 x 4-core] X4450s.
I wouldn't mind getting 4 x 8-core Nehalem chips in there.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405910</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31421536</id>
	<title>Re:Finally!</title>
	<author>QuantumRiff</author>
	<datestamp>1268141700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>And in 15 years, nuclear fusion will still somehow be 15 years in the future...</p></htmltext>
<tokenext>And in 15 years , nuclear fusion will still somehow be 15 years in the future.. .</tokentext>
<sentencetext>And in 15 years, nuclear fusion will still somehow be 15 years in the future...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406860</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406376</id>
	<title>Re:Licensed per Core</title>
	<author>Anonymous</author>
	<datestamp>1268046780000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I just see retarded, arbitrary licensing restrictions going away.</p><p>What if I only want your software to run on 2 of those 8 cores, and I want to use the remaining 4-cores-worth of the license on another box? Oh? You don't allow that? I'll go buy the competition's product.</p><p>Or, more likely, you'll just cave in to my demands and sell traditional per-box licenses, regardless of "cores" or "processors" or whatever.</p></htmltext>
<tokenext>I just see retarded , arbitrary licensing restrictions going away . What if I only want your software to run on 2 of those 8 cores , and I want to use the remaining 4-cores-worth of the license on another box ?
Oh ? You do n't allow that ?
I 'll go buy the competition 's product . Or , more likely , you 'll just cave in to my demands and sell traditional per-box licenses , regardless of " cores " or " processors " or whatever .</tokentext>
<sentencetext>I just see retarded, arbitrary licensing restrictions going away. What if I only want your software to run on 2 of those 8 cores, and I want to use the remaining 4-cores-worth of the license on another box?
Oh? You don't allow that?
I'll go buy the competition's product. Or, more likely, you'll just cave in to my demands and sell traditional per-box licenses, regardless of "cores" or "processors" or whatever.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406162</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31410674</id>
	<title>Re:It's obvious</title>
	<author>Sulphur</author>
	<datestamp>1268077980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Run a REAL operating system, like VISTA!</p><p>Fields of penguins, and Vistas of penguins.</p></htmltext>
<tokenext>Run a REAL operating system , like VISTA ! Fields of penguins , and Vistas of penguins .</tokentext>
<sentencetext>Run a REAL operating system, like VISTA! Fields of penguins, and Vistas of penguins.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405680</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408090</id>
	<title>Re:Finally!</title>
	<author>kimvette</author>
	<datestamp>1268054400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yes, but unfortunately it will only run Crysis on Windows XP.  For Vista, you have to wait for the 16-core, I'm afraid. :-/</p></htmltext>
<tokenext>Yes , but unfortunately it will only run Crysis on Windows XP .
For Vista , you have to wait for the 16-core , I 'm afraid .
: -/</tokentext>
<sentencetext>Yes, but unfortunately it will only run Crysis on Windows XP.
For Vista, you have to wait for the 16-core, I'm afraid.
:-/</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405824</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31410552</id>
	<title>Looks like it's been around for a few months testing</title>
	<author>cloakedpegasus</author>
	<datestamp>1268076600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><a href="http://browse.geekbench.ca/geekbench2/view/198947" title="geekbench.ca" rel="nofollow">http://browse.geekbench.ca/geekbench2/view/198947</a> [geekbench.ca]</htmltext>
<tokenext>http : //browse.geekbench.ca/geekbench2/view/198947 [ geekbench.ca ]</tokentext>
<sentencetext>http://browse.geekbench.ca/geekbench2/view/198947 [geekbench.ca]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740</id>
	<title>Balance</title>
	<author>Anonymous</author>
	<datestamp>1268044620000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>Does it have the memory I/O bandwidth to keep up with the CPUs? When will I be able to actually buy a motherboard with 8 of these 8 core CPUs, and what kind of a frame rate would Crysis get on that rig?</htmltext>
<tokenext>Does it have the memory I/O bandwidth to keep up with the CPUs ?
When will I be able to actually buy a motherboard with 8 of these 8 core CPUs , and what kind of a frame rate would Crysis get on that rig ?</tokentext>
<sentencetext>Does it have the memory I/O bandwidth to keep up with the CPUs?
When will I be able to actually buy a motherboard with 8 of these 8 core CPUs, and what kind of a frame rate would Crysis get on that rig?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406254</id>
	<title>When will Moore's Law apply to Cores?</title>
	<author>Anonymous</author>
	<datestamp>1268046420000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>So can we now expect a doubling of cores every 18 months?</p></htmltext>
<tokenext>So can we now expect a doubling of cores every 18 months ?</tokentext>
<sentencetext>So can we now expect a doubling of cores every 18 months?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409798</id>
	<title>Re:Finally!</title>
	<author>DiEx-15</author>
	<datestamp>1268068620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Yes but... It still won't end the Duke Nukem Forever jokes.</htmltext>
<tokenext>Yes but... It still wo n't end the Duke Nukem Forever jokes .</tokentext>
<sentencetext>Yes but... It still won't end the Duke Nukem Forever jokes.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405824</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406706</id>
	<title>Re:Hyperthreading</title>
	<author>Anonymous</author>
	<datestamp>1268047800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Hyperthreading used to suck due to Intel's initial implementation. The modern versions are decent and should provide a boost. SMT (which Hyperthreading is an implementation of) is in many cases a good/interesting design decision.</p><p>http://en.wikipedia.org/wiki/Simultaneous_multithreading</p></htmltext>
<tokenext>Hyperthreading used to suck due to Intel 's initial implementation .
The modern versions are decent and should provide a boost .
SMT ( which Hyperthreading is an implementation of ) is in many cases a good/interesting design decision . http : //en.wikipedia.org/wiki/Simultaneous_multithreading</tokentext>
<sentencetext>Hyperthreading used to suck due to Intel's initial implementation.
The modern versions are decent and should provide a boost.
SMT (which Hyperthreading is an implementation of) is in many cases a good/interesting design decision. http://en.wikipedia.org/wiki/Simultaneous_multithreading</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405680</id>
	<title>It's obvious</title>
	<author>David Gerard</author>
	<datestamp>1268044380000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>Run a REAL operating system, like <a href="http://www.facebook.com/group.php?gid=83481756967" title="facebook.com" rel="nofollow">VISTA!</a> [facebook.com]</p></htmltext>
<tokenext>Run a REAL operating system , like VISTA !
[ facebook.com ]</tokentext>
<sentencetext>Run a REAL operating system, like VISTA!
[facebook.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31412070</id>
	<title>Re:When will Moore's Law apply to Cores?</title>
	<author>Anonymous</author>
	<datestamp>1268141700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>64 cores is more than enough!</p></htmltext>
<tokenext>64 cores is more than enough !</tokentext>
<sentencetext>64 cores is more than enough!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406254</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31411380</id>
	<title>Re:Hyperthreading</title>
	<author>Slashcrap</author>
	<datestamp>1268132280000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>Why are they still announcing hyperthreading? It was established long ago that it had no benefit. It's been off on any machines I've ever purchased.</p></div><p>Oh God, thank you so much for doing this.</p><p>You have given me the confidence to stand up in front of the entire World and scream "I too am a dumb fucking faggot!".</p>
	</htmltext>
<tokenext>Why are they still announcing hyperthreading ?
It was established long-ago that it had no benefit .
It 's been off on any machines I 've ever purchased . Oh God , thank you so much for doing this . You have given me the confidence to stand up in front of the entire World and scream " I too am a dumb fucking faggot !
" .</tokentext>
<sentencetext>Why are they still announcing hyperthreading?
It was established long-ago that it had no benefit.
It's been off on any machines I've ever purchased. Oh God, thank you so much for doing this. You have given me the confidence to stand up in front of the entire World and scream "I too am a dumb fucking faggot!
".
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407200</id>
	<title>Re:Balance</title>
	<author>Anonymous</author>
	<datestamp>1268049660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>and what kind of a frame rate would Crysis get on that rig?</p></div><p>maybe 20fps?</p>
	</htmltext>
<tokenext>and what kind of a frame rate would Crysis get on that rig ? maybe 20fps ?</tokentext>
<sentencetext>and what kind of a frame rate would Crysis get on that rig? maybe 20fps?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31413792</id>
	<title>Why the same thing we do every night...</title>
	<author>DarthVain</author>
	<datestamp>1268150820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>try and take over the world!</p></htmltext>
<tokenext>try and take over the world !</tokentext>
<sentencetext>try and take over the world!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407082</id>
	<title>Re:Balance</title>
	<author>Anonymous</author>
	<datestamp>1268049240000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>a medium business can probably run their entire datacenter on 2 boxen + an entry level SAN.</p></div></blockquote><p>
I'll give you that you could probably virtualise a hell of a lot of stuff on top of two of these boxes in an ESX cluster (although you'd have no redundancy) but one SAN? You wouldn't have the IOPS.<br>
<br>
Sorry, I'm being a pedant...</p>
	</htmltext>
<tokenext>a medium business can probably run their entire datacenter on 2 boxen + an entry level SAN .
I 'll give you that you could probably virtualise a hell of a lot of stuff on top of two of these boxes in an ESX cluster ( although you 'd have no redundancy ) but one SAN ?
You would n't have the IOPS .
Sorry , I 'm being a pedant.. .</tokentext>
<sentencetext>a medium business can probably run their entire datacenter on 2 boxen + an entry level SAN.
I'll give you that you could probably virtualise a hell of a lot of stuff on top of two of these boxes in an ESX cluster (although you'd have no redundancy) but one SAN?
You wouldn't have the IOPS.
Sorry, I'm being a pedant...
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405868</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405824</id>
	<title>Finally!</title>
	<author>Anonymous</author>
	<datestamp>1268044980000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext>The end to "can it run Crysis?" jokes!</htmltext>
<tokenext>The end to " can it run Crysis ?
" jokes !</tokentext>
<sentencetext>The end to "can it run Crysis?
" jokes!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405982</id>
	<title>Re:Balance</title>
	<author>afidel</author>
	<datestamp>1268045580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Basically yes, it actually has more bandwidth for remote NUMA access than current Nehalem-EP systems, but fewer memory lanes per core, so less bandwidth under high contention or more bandwidth under low contention. IBM has announced the x3690 X5, which has 32 DIMM slots for two EX sockets, which will be a killer DB/Virtualization platform if priced competitively.</htmltext>
<tokenext>Basically yes , it actually has more bandwidth for remote NUMA access than current Nehalem-EP systems but fewer memory lanes per core so less bandwidth under high contention or more bandwidth under low contention .
IBM has announced the x3690 X5 which has 32 DIMM slots for two EX sockets which will be a killer DB/Virtualization platform if priced competitively .</tokentext>
<sentencetext>Basically yes, it actually has more bandwidth for remote NUMA access than current Nehalem-EP systems but fewer memory lanes per core so less bandwidth under high contention or more bandwidth under low contention.
IBM has announced the x3690 X5 which has 32 DIMM slots for two EX sockets which will be a killer DB/Virtualization platform if priced competitively.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407540</id>
	<title>Somebody's gotta ask...</title>
	<author>unitron</author>
	<datestamp>1268051400000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext><p>So, how soon until newegg.com has the fake ones in stock?</p></htmltext>
<tokenext>So , how soon until newegg.com has the fake ones in stock ?</tokentext>
<sentencetext>So, how soon until newegg.com has the fake ones in stock?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31412074</id>
	<title>Re:Ditch x86</title>
	<author>Anonymous</author>
	<datestamp>1268141760000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Compare Atom to Cortex.<br>Using the same process, what's the performance, power consumption and die size of each?</p></htmltext>
<tokenext>Compare Atom to Cortex . Using the same process , what 's the performance , power consumption and die size of each ?</tokentext>
<sentencetext>Compare Atom to Cortex. Using the same process, what's the performance, power consumption and die size of each?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408376</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31420458</id>
	<title>Re:Balance</title>
	<author>petermgreen</author>
	<datestamp>1268135160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>I'm (perhaps optimistically) assuming that that means all the RAM in an up to 8 socket system wouldn't be more than one hop away from any core.</i><br>These only have 4 QPI links, so in a system with more than 4 sockets (remember you have to hook up the IO hubs somewhere), such as an 8-socket system, some processors will be two hops away from each other.</p><p>there is a diagram of an example 8-core setup at <a href="http://www.hardwarecanucks.com/wp-content/uploads/intel_nehalem-ex-8-core.jpg" title="hardwarecanucks.com">http://www.hardwarecanucks.com/wp-content/uploads/intel_nehalem-ex-8-core.jpg</a> [hardwarecanucks.com]</p></htmltext>
<tokenext>I 'm ( perhaps optimistically ) assuming that that means all the RAM in an up to 8 socket system would n't be more than one hop away from any core . These only have 4 QPI links so in a system with more than 4 sockets ( remember you have to hook up the IO hubs somewhere ) , such as an 8-socket system , some processors will be two hops away from each other . there is a diagram of an example 8-core setup at http : //www.hardwarecanucks.com/wp-content/uploads/intel_nehalem-ex-8-core.jpg [ hardwarecanucks.com ]</tokentext>
<sentencetext>I'm (perhaps optimistically) assuming that that means all the RAM in an up to 8 socket system wouldn't be more than one hop away from any core. These only have 4 QPI links so in a system with more than 4 sockets (remember you have to hook up the IO hubs somewhere), such as an 8-socket system, some processors will be two hops away from each other. there is a diagram of an example 8-core setup at http://www.hardwarecanucks.com/wp-content/uploads/intel_nehalem-ex-8-core.jpg [hardwarecanucks.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405876</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406710</id>
	<title>No benefit?</title>
	<author>Xocet_00</author>
	<datestamp>1268047800000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><a href="http://ixbtlabs.com/articles3/cpu/ci7-turbo-ht-p1.html" title="ixbtlabs.com">This article</a> [ixbtlabs.com] outlines the various circumstances under which hyperthreading either benefits or impedes performance. While it's true that <i>on average</i> the benefit was zero (meaning about half of what they tested was faster, and about half was slower) there are clearly a lot of applications that see significant performance gains. <br> <br>
It should also be noted that the applications that benefit are ones that would generally be used in Xeon (server and workstation) machines. Further, most of the applications that failed to benefit from hyperthreading are not written to take advantage of many (more than one or two) cores. As applications are updated for "many core" systems, it is likely that the benefit from hyperthreading will become more significant.<br> <br>In any case, it is far from "established" that hyperthreading has "no benefit."</htmltext>
<tokenext>This article [ ixbtlabs.com ] outlines the various circumstances under which hyperthreading either benefits or impedes performance .
While it 's true that on average the benefit was zero ( meaning about half of what they tested was faster , and about half was slower ) there are clearly a lot of applications that see significant performance gains .
It should also be noted that the applications that benefit are ones that would generally be used in Xeon ( server and workstation ) machines .
Further , most of the applications that failed to benefit from hyperthreading are not written to take advantage of many ( more than one or two ) cores .
As applications are updated for " many core " systems , it is likely that the benefit from hyperthreading will become more significant .
In any case , it is far from " established " that hyperthreading has " no benefit .
"</tokentext>
<sentencetext>This article [ixbtlabs.com] outlines the various circumstances under which hyperthreading either benefits or impedes performance.
While it's true that on average the benefit was zero (meaning about half of what they tested was faster, and about half was slower) there are clearly a lot of applications that see significant performance gains.
It should also be noted that the applications that benefit are ones that would generally be used in Xeon (server and workstation) machines.
Further, most of the applications that failed to benefit from hyperthreading are not written to take advantage of many (more than one or two) cores.
As applications are updated for "many core" systems, it is likely that the benefit from hyperthreading will become more significant.
In any case, it is far from "established" that hyperthreading has "no benefit.
"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407826</id>
	<title>Re:Balance</title>
	<author>raddan</author>
	<datestamp>1268053020000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>You are not going to put these in your gaming rig</p></div><p>I hear this a lot, but in a modern OS (e.g., one with a good scheduler) and with modern applications (ones that use either threading or cooperating processes), you can easily use a handful of processors, and yes, with normal desktop apps.  Google Chrome, for instance, uses the cooperating process model, and for security reasons, I think you're going to start seeing [good] programmers divvy up their applications this way.  Not only does it make application security a bit easier (separate address space for each code module), but you get real CPU-level parallelism for free.  FreeBSD's new scheduler can even put threads running in the same process on different cores in some cases.</p>
<p>My concern isn't being able to use all those cores-- it's being able to throttle or shut them off when I'm not.</p>
	</htmltext>
<tokenext>You are not going to put these in your gaming rig . I hear this a lot , but in a modern OS ( e.g. , one with a good scheduler ) and with modern applications ( ones that use either threading or cooperating processes ) , you can easily use a handful of processors , and yes , with normal desktop apps .
Google Chrome , for instance , uses the cooperating process model , and for security reasons , I think you 're going to start seeing [ good ] programmers divvy up their applications this way .
Not only does it make application security a bit easier ( separate address space for each code module ) , but you get real CPU-level parallelism for free .
FreeBSD 's new scheduler can even put threads running in the same process on different cores in some cases .
My concern is n't being able to use all those cores-- it 's being able to throttle or shut them off when I 'm not .</tokentext>
<sentencetext>You are not going to put these in your gaming rig. I hear this a lot, but in a modern OS (e.g., one with a good scheduler) and with modern applications (ones that use either threading or cooperating processes), you can easily use a handful of processors, and yes, with normal desktop apps.
Google Chrome, for instance, uses the cooperating process model, and for security reasons, I think you're going to start seeing [good] programmers divvy up their applications this way.
Not only does it make application security a bit easier (separate address space for each code module), but you get real CPU-level parallelism for free.
FreeBSD's new scheduler can even put threads running in the same process on different cores in some cases.
My concern isn't being able to use all those cores-- it's being able to throttle or shut them off when I'm not.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405868</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406514</id>
	<title>I AM....</title>
	<author>Anonymous</author>
	<datestamp>1268047200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Imagine a Beowulf cluster with these.</p></htmltext>
<tokenext>Imagine a Beowulf cluster with these .</tokentext>
<sentencetext>Imagine a Beowulf cluster with these.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408206</id>
	<title>Re:When will Moore's Law apply to Cores?</title>
	<author>Klintus Fang</author>
	<datestamp>1268055180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If you insist on comparing the products that Intel/AMD release into the desktop space with the products that they release into the server space 6-12 months later, then yeah.  But you are comparing apples and oranges here...</htmltext>
<tokenext>If you insist on comparing the products that Intel/AMD release into the desktop space with the products that they release into the server space 6-12 months later then yeah .
But you are comparing apples and oranges here.. .</tokentext>
<sentencetext>if you insist on comparing the products that Intel/AMD release into the desktop space with the products that they release into the server space 6-12 months later then yeah.
But you are comparing apples and oranges here...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406254</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409782</id>
	<title>Fuck it, we're doing 8 cores</title>
	<author>Anonymous</author>
	<datestamp>1268068260000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The Onion said it best.</p></htmltext>
<tokentext>The Onion said it best .</tokentext>
<sentencetext>The onion said it best.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406374</id>
	<title>Crapware</title>
	<author>Archangel Michael</author>
	<datestamp>1268046780000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>Now I can run all my crapware, viruses, trojans, malware, and other dubious software bits at FULL SPEED! Yay</p></htmltext>
<tokentext>Now I can run all my crapware , viruses , trojans , malware , and other dubious software bits at FULL SPEED !
Yay</tokentext>
<sentencetext>Now I can run all my crapware, viruses, trojans, malware, and other dubious software bits at FULL SPEED!
Yay</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407042</id>
	<title>Re:Licensed per Core</title>
	<author>MessyBlob</author>
	<datestamp>1268049060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Try instead charging for software transactions, which is a useful measure of work done by the software. It's a bit like paying for fuel by the gallon, rather than charging different amounts depending on the size of engine in the car you drive.</htmltext>
<tokentext>Try instead charging for software transactions , which is a useful measure of work done by the software .
It 's a bit like paying for fuel by the gallon , rather than charging different amounts depending on the size of engine in the car you drive .</tokentext>
<sentencetext>Try instead charging for software transactions, which is a useful measure of work done by the software.
It's a bit like paying for fuel by the gallon, rather than charging different amounts depending on the size of engine in the car you drive.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406162</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406686</id>
	<title>Ditch x86</title>
	<author>MessyBlob</author>
	<datestamp>1268047740000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext>x86 has been led into a blind alley. Time now for a redesign, to make an instruction set and execution model that doesn't waste 95\% of its cycles waiting for memory.</htmltext>
<tokentext>x86 has been led into a blind alley .
Time now for a redesign , to make an instruction set and execution model that does n't waste 95 \ % of its cycles waiting for memory .</tokentext>
<sentencetext>x86 has been led into a blind alley.
Time now for a redesign, to make an instruction set and execution model that doesn't waste 95\% of its cycles waiting for memory.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31420368</id>
	<title>Re:Balance</title>
	<author>petermgreen</author>
	<datestamp>1268134740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>Does it have the memory I/O bandwidth to keep up with the CPUs? </i><br>IIRC it has quad-channel RAM, so I'd expect the answer to this to be similar to running a quad-core on dual-channel RAM.</p><p>At least according to Tom's Hardware, going from dual to triple channel on a quad core doesn't make much difference, so it seems one channel per two cores is about right with current tech.</p><p><i>When will I be able to actually buy a mother board with 8 of these 8 core CPUs</i><br>Dunno, but I expect you will see the first boards and servers built around this CPU around release time (I'm sure the server and motherboard vendors already have engineering samples). I dunno if any of them will be 8-socket, though.</p><p><i>and what kind of a frame rate would Crysis get on that rig?</i><br>Probably not that much better than what it gets on current hardware.</p></htmltext>
<tokentext>Does it have the memory I/O bandwidth to keep up with the CPUs ?
IIRC it has quad channel ram so i 'd expect the answer to this to be similar to running a quad-core on dual channel ram.At least according to toms hardware going from dual to triple channel on a quad core does n't make much difference .
so it seems one channel per two cores is about right with current tech.When will I be able to actually buy a mother board with 8 of these 8 core CPUsDunno , I expect you will see the first boards and servers built arround this CPU arround release time ( i 'm sure the server and motherboard vendors already have engineering samples ) .
I dunno if any of them will be 8-socket though.and what kind of a frame rate would Crysis get on that rig ? Probablly not that much better than what it gets on current hardware .</tokentext>
<sentencetext>Does it have the memory I/O bandwidth to keep up with the CPUs?
IIRC it has quad channel ram so i'd expect the answer to this to be similar to running a quad-core on dual channel ram.At least according to toms hardware going from dual to triple channel on a quad core doesn't make much difference.
so it seems one channel per two cores is about right with current tech.When will I be able to actually buy a mother board with 8 of these 8 core CPUsDunno, I expect you will see the first boards and servers built arround this CPU arround release time (i'm sure the server and motherboard vendors already have engineering samples).
I dunno if any of them will be 8-socket though.and what kind of a frame rate would Crysis get on that rig?Probablly not that much better than what it gets on current hardware.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336</id>
	<title>Hyperthreading</title>
	<author>MobyDisk</author>
	<datestamp>1268046660000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>Why are they still announcing hyperthreading?  It was established long ago that it had no benefit.  It's been off on every machine I've ever purchased.</p></htmltext>
<tokentext>Why are they still announcing hyperthreading ?
It was established long-ago that it had no benefit .
It 's been off on any machines I 've ever purchased .</tokentext>
<sentencetext>Why are they are still announcing hyperthreading?
It was established long-ago that it had no benefit.
It's been off on any machines I've ever purchased.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31411132</id>
	<title>Re:Finally!</title>
	<author>evilWurst</author>
	<datestamp>1268127720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>&gt; In 15 years you might have a 1TB database running on your personal communicator that fits in your pocket. (in keeping with the "15 years out" prediction theme of the day.)</p><p>Hmm. Applying one of the Moore's Law variants to NAND flash, if storage size for the same price doubles every 18 months, 15 years is 10 generations. 2^10 = 1024. 4-8 GB of flash memory is already relatively cheap today, even in the form of a microSD card the size of a fingernail, so I'd be kind of disappointed if we didn't have 1 TB flash drives (or some other tech that eclipses flash) by 15 years from now.</p></htmltext>
<tokentext>&gt; In 15 years you might have a 1TB database running on your personal communicator that fits in your pocket .
( in keeping with the " 15 years out " prediction theme of the day.Hmm .
Applying one of the Moore 's Law variants to NAND flash , if storage size for the same price doubles every 18 months , 15 years is 10 generations .
2 ^ 10 = 1024 .
4-8 GB of flash memory is already relative cheap today , even in the form of a microSD card the size of a fingernail , so I 'd be kind of disappointed if we did n't have 1 TB flash drives ( or some other tech that eclipses flash ) by 15 years from now .</tokentext>
<sentencetext>&gt; In 15 years you might have a 1TB database running on your personal communicator that fits in your pocket.
(in keeping with the "15 years out" prediction theme of the day.Hmm.
Applying one of the Moore's Law variants to NAND flash, if storage size for the same price doubles every 18 months, 15 years is 10 generations.
2^10 = 1024.
4-8 GB of flash memory is already relative cheap today, even in the form of a microSD card the size of a fingernail, so I'd be kind of disappointed if we didn't have 1 TB flash drives (or some other tech that eclipses flash) by 15 years from now.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406860</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407552</id>
	<title>Re:programs compatible with 8 cores</title>
	<author>Anonymous</author>
	<datestamp>1268051400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Comedy gold!</htmltext>
<tokentext>Comedy gold !</tokentext>
<sentencetext>Comedy gold!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406508</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31427174</id>
	<title>Re:Hyperthreading</title>
	<author>Bengie</author>
	<datestamp>1268241060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>HT on the i7. Bit of info.</p><p>HT on the P4 sucked because of a lack of duplicated units. The P4 had a double-pumped integer unit, so HT could dramatically speed up int-based calculations, but the FP/SSE was not, so if two threads were running at the same time and tried to access the FP, one thread would get the FP and the other would cause a stall. Because of this stall, coupled with the overhead of switching hardware threads, there was a performance loss.</p><p>Jump forward to the i7. Most everything is duplicated. The i7 has a shared SSE, but has separate add/div/mul FP units, duplicated integer units, and a few others. This means when HT switches threads, there's a good chance that there will be no stall caused by a contended unit.</p><p>A prime example is databases. With the P4, the DB community at large recommended disabling HT. One reason was contended units, but also HT causes the cache to be effectively split, 1/2 for one thread and 1/2 for the other, and DBs are cache sensitive. There have been several benchmarks showing the i7 crunching 1.5TB OLAP cubes with results of 200\%-400\% increases in speed clock-for-clock compared to previous-gen Intels (aka the Core Duo line).</p><p>Applications that are heavily SSE optimized will more than likely see a slowdown, but this is easily negated by simple optimizations to keep the app from running two threads on the same physical core. In a system/server that has mixed loads, HT could make a dramatic improvement.</p><p>I know with my i7, my grid/distributed computing programs claim ~2.7 gigaflops per virtual CPU, which means 2 flops/cycle per core. That means with HT, I'm getting 1 flop per hardware thread. MUCH faster for SSE apps, but only on a per-core basis, not per thread. Best bet would be to run 4 threads on SSE and 4 on regular floats.</p><p>Both AMD and Intel can only do 1 DP float or INT64 per cycle w/o SSE, but Intel with HT can do 2/cycle. So, overly simplified, it's 2x faster. Most of the time you won't see 2x speed, but you will almost always see a decent amount more than w/o HT. FYI, AMD's future chips coming out next year will have something almost identical to HT.</p></htmltext>
<tokentext>HT on the i7 .
Bit of info.HT on the p4 sucked because of lack of duplicated units .
The P4 had a double pumped integer unit , so HT could dramatically speed up int based calculations , but the FP/SSE was not so if two threads were running at the same time and tried to access the FP , one thread would get the FP and the other would cause a stall .
Because of this stall coupled with overhead of switching hardware threads , there was a performance loss.Jump forward to the i7 .
Most everything is duplicated .
The i7 has a shared SSE , but has separate add/div/mul FP units and duplicated integer units and a few others .
This means when HT switches threads , there 's a good chance that there will be no stall caused by a contended unit.A prime example is databases .
With the P4 , the DB community at large recommended disabling HT .
One reason was contended units , but also HT causes the cache to be effectively split , 1/2 for one thread and 1/2 for the other and DBs are cache sensitive .
There have been several benchmarks showing the i7 crunching 1.5TB OLAP cubes with results of 200 \ % -400 \ % increase in speed clock-for-clock compared to previous gen Intels ( aka core duo line ) .Applications that are heavily SSE optimized will more than likely see a slowdown , but this is easily negated by simple optimizations to limit the app from running two threads on the same physical core .
In a system/server that has mixed loads , HT could make a dramatic improvement.I know with my i7 , my grid/distributed computing programs claim ~ 2.7gigaflops per virtual cpu , which means 2flops/cycle per core .
That means with HT , I 'm getting 1flop per hardware thread .
MUCH faster for SSE apps , but only on a per core basis , not per thread .
Best bet would be to run 4 threads on SSE and 4 on regular Floats.Both AMD and Intel can only do 1 DP Float or INT64 per cycle w/o SSE , but Intel with HT can do 2/cycle .
So , overly simplified , it 's 2xs faster .
Most of the time you wo n't see 2x speed , but you will almost always see a decent amount more than w/o HT .
FYI , AMD 's future chips coming out next year will have something almost identical to HT .</tokentext>
<sentencetext>HT on the i7.
Bit of info.HT on the p4 sucked because of lack of duplicated units.
The P4 had a double pumped integer unit, so HT could dramatically speed up int based calculations, but the FP/SSE was not so if two threads were running at the same time and tried to access the FP, one thread would get the FP and the other would cause a stall.
Because of this stall coupled with overhead of switching hardware threads, there was a performance loss.Jump forward to the i7.
Most everything is duplicated.
The i7 has a shared SSE, but has separate add/div/mul FP units and duplicated integer units and a few others.
This means when HT switches threads, there's a good chance that there will be no stall caused by a contended unit.A prime example is databases.
With the P4, the DB community at large recommended disabling HT.
One reason was contended units, but also HT causes the cache to be effectively split, 1/2 for one thread and 1/2 for the other and DBs are cache sensitive.
There have been several benchmarks showing the i7 crunching 1.5TB OLAP cubes with results of 200\%-400\% increase in speed clock-for-clock compared to previous gen Intels(aka core duo line).Applications that are heavily SSE optimized will more than likely see a slowdown, but this is easily negated by simple optimizations to limit the app from running two threads on the same physical core.
In a system/server that has mixed loads, HT could make a dramatic improvement.I know with my i7, my grid/distributed computing programs claim ~2.7gigaflops per virtual cpu, which means 2flops/cycle per core.
That means with HT, I'm getting 1flop per hardware thread.
MUCH faster for SSE apps, but only on a per core basis, not per thread.
Best bet would be to run 4 threads on SSE and 4 on regular Floats.Both AMD and Intel can only do 1 DP Float or INT64 per cycle w/o SSE, but Intel with HT can do 2/cycle.
So, overly simplified, it's 2xs faster.
Most of the time you won't see 2x speed, but you will almost always see a decent amount more than w/o HT.
FYI, AMD's future chips coming out next year will have something almost identical to HT.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406372</id>
	<title>Re:Balance</title>
	<author>BJ\_Covert\_Action</author>
	<datestamp>1268046780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>You are not going to put these in your gaming rig</p></div><p>Tell that to the gamers with Epeen insecurities.</p></htmltext>
<tokentext>You are not going to put these in your gaming rig Tell that to the gamers with Epeen insecurities .</tokentext>
<sentencetext>You are not going to put these in your gaming rig
Tell that to the gamers with Epeen insecurities.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405868</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408770</id>
	<title>Cost?</title>
	<author>Anonymous</author>
	<datestamp>1268059020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I don't see anyone talking about the cost of this. Any ideas?</htmltext>
<tokentext>I do n't see anyone talking about the cost of this , any ideas ?</tokentext>
<sentencetext>I don't see anyone talking about the cost of this, any ideas?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408376</id>
	<title>Re:Ditch x86</title>
	<author>Klintus Fang</author>
	<datestamp>1268056260000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>People have been arguing, as you are, that x86's bloated CISC instruction set is inferior to a cleaner RISC architecture for the last 20+ years.  Nobody has ever proven with hard data that the elegance of the instruction set matters, though.</p><p>What evidence we do have goes against that argument.</p><p>Apple machines used a cleaner RISC architecture for a while in the desktop space.  They never performed any better than equivalent x86-based machines, and in the end Apple abandoned RISC and moved to x86.</p><p>Intel came out with a cleaner RISC-based instruction set that the Itanium line uses.  If x86 were really as bad as you say, Itanium chips would be running circles around the x86-based server chips provided by both Intel and AMD.  That isn't happening.</p><p>Another thing you might not realize:  all x86 chips, from both Intel and AMD, once you strip them down to the microcode level, ARE RISC designs under the hood.  RISC is the cleaner way to implement the microcode and the underlying execution architecture, but all historical data seems to indicate that the question of whether the instruction set that sits on top of that is RISC or CISC is irrelevant to performance.  It is arguably more complicated to design a CISC-based chip like x86, but that clearly has not been an obstacle to competing with RISC on performance for Intel or AMD engineers.</p></htmltext>
<tokentext>People have been arguing as you are that x86 's bloated CISC instruction set was inferior to a cleaner RISC architecture for the last 20 + years .
Nobody has ever proven that the elegance of the instruction set matters with hard data though.What evidence we do have goes against that argument.Apple machines used a cleaner RISC architecture for a while in the desktop space .
They never performed any better than equivalent x86 based machines , and in the end Apple abandoned RISC and moved to x86.Intel came out with a cleaner RISC based instruction set that that the Itanium line uses .
If x86 was really as bad as you say , Itanium chips would be running circles around the x86 based server chips provided by both Intel and AMD .
That is n't happenning.Another thing you might not realize : all x86 chips , from both Intel and AMD , once you strip them down to the micro-code level ARE RISC designs under the hood .
RISC is the cleaner way to implement the micro code and the underlying execution architecture , but all historical data seems to indicate that the question of whether the instruction set that sits on top of that is RISC or CISC is irrelevant to performance .
It is arguably more complicated to design a CISC based chip like x86 , but that clearly has not been an obstacle to competing with RISC on the performance end for Intel or AMD engineers .</tokentext>
<sentencetext>People have been arguing as you are that x86's bloated CISC instruction set was inferior to a cleaner RISC architecture for the last 20+ years.
Nobody has ever proven that the elegance of the instruction set matters with hard data though.What evidence we do have goes against that argument.Apple machines used a cleaner RISC architecture for a while in the desktop space.
They never performed any better than equivalent x86 based machines, and in the end Apple abandoned RISC and moved to x86.Intel came out with a cleaner RISC based instruction set that that the Itanium line uses.
If x86 was really as bad as you say, Itanium chips would be running circles around the x86 based server chips provided by both Intel and AMD.
That isn't happenning.Another thing you might not realize:  all x86 chips, from both Intel and AMD, once you strip them down to the micro-code level ARE RISC designs under the hood.
RISC is the cleaner way to implement the micro code and the underlying execution architecture, but all historical data seems to indicate that the question of whether the instruction set that sits on top of that is RISC or CISC is irrelevant to performance.
It is arguably more complicated to design a CISC based chip like x86, but that clearly has not been an obstacle to competing with RISC on the performance end for Intel or AMD engineers.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406686</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407636</id>
	<title>Re:It's obvious</title>
	<author>StikyPad</author>
	<datestamp>1268051880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I thought the obvious answer would be octa-porn.</p><p>And you're free to interpret that any way you like.</p></htmltext>
<tokentext>I thought the obvious answer would be octa-porn.And you 're free to interpret that any way you like .</tokentext>
<sentencetext>I thought the obvious answer would be octa-porn.And you're free to interpret that any way you like.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405680</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31410020</id>
	<title>Re:March of the penguins</title>
	<author>Anonymous</author>
	<datestamp>1268071020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>meh... I have an 8-core box now that I've gleefully watched 8 penguins display on while booting Linux. It has a pair of quad-core Xeons and 16GB of RAM (I run a lot of VMs, in case you're wondering).</htmltext>
<tokentext>meh... I have an 8-core box now that I 've gleefully watched 8 penguins display on while booting Linux .
Has a pair of quad-core Xeons and 16GB of RAM ( I run a lot of VMs in case you 're wondering ) .</tokentext>
<sentencetext>meh... I have an 8-core box now that I've gleefully watched 8 penguins display on while booting linux.
Has a pair of quad-core Xeons and 16GB of RAM (I run a lot of VMs in case you're wondering).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405630</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409560</id>
	<title>Re:Hyperthreading</title>
	<author>Anonymous</author>
	<datestamp>1268065860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Well-written multi-threaded applications see a "hyperthreaded cpu" practically the same as a "non-hyperthreaded cpu". That has been my experience since the first hyperthreading chips appeared. Poorly written multi-threaded applications suck no matter what kind of chip they run on.</p></htmltext>
<tokentext>Well written multi-threaded applications see a " hyperthreaded cpu " practically the same as a " non-hyperthreaded cpu " .
That has been my experience since the first hyperthreading chips first appeared .
Poorly written multi-threaded applications suck no matter what kind they run on .</tokentext>
<sentencetext>Well written multi-threaded applications see a "hyperthreaded cpu" practically the same as a "non-hyperthreaded cpu".
That has been my experience since the first hyperthreading chips first appeared.
Poorly written multi-threaded applications suck no matter what kind they run on.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406738</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405868</id>
	<title>Re:Balance</title>
	<author>Anonymous</author>
	<datestamp>1268045160000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>These are targeted at the virtualization and specialized application space.  You are not going to put these in your gaming rig, and you're not going to use the 4+ core models in your traditional stand-alone application server.  You could get a much better dollar-to-performance ratio elsewhere if those are your intended applications.</p><p>Now slap two or more of these things in a Linux box with a ton of UMLs running or on VMware ESX, load the system up with 128 gigs of RAM, and a medium business can probably run their entire datacenter on 2 boxen + an entry-level SAN.</p></htmltext>
<tokentext>These are targeted at the virtualization and specialized application space .
You are not going to put these in your gaming rig , and your not going to use the + 4 core models in your tranditional stand alone application server .
You could get much better dollar to performance ration elsewhere if those are your intended applications.Now slapping two or more of these things on a Linux box with a ton of UMLs running or on VMware ESX , and loading the system up with 128 gigs of ram and a medium business can probalby run their entire datacenter on 2 boxen + an entry level SAN .</tokentext>
<sentencetext>These are target it the Virtualization and specialized application space.
You are not going to put these in your gaming rig, and your not going to use the +4 core models in your tranditional stand alone application server.
You could get much better dollar to performance ration elsewhere if those are your intended applications.Now slapping two or more of these things on a Linux box with a ton of UMLs running or on VMware ESX, and loading the system up with 128 gigs of ram and a medium business can probalby run their entire datacenter on 2 boxen + an entry level SAN.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31411790</id>
	<title>What about Xeon ?</title>
	<author>vikingpower</author>
	<datestamp>1268138100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>One muses whether or not this is also the upcoming end of the Xeon line?</p></htmltext>
<tokentext>One muses whether or not this also is the upcoming end of the Xeon line ?</tokentext>
<sentencetext>One muses whether or not this also is the upcoming end of the Xeon line ?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406106</id>
	<title>Re:programs compatible with 8 cores</title>
	<author>XanC</author>
	<datestamp>1268045940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Apache is popular.  MySQL is popular.  Pretty much any Web or DB server will eat these right up.</p></htmltext>
<tokentext>Apache is popular .
MySQL is popular .
Pretty much any Web or DB server will eat these right up .</tokentext>
<sentencetext>Apache is popular.
MySQL is popular.
Pretty much any Web or DB server will eat these right up.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405896</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407332</id>
	<title>Well a few reasons</title>
	<author>Sycraft-fu</author>
	<datestamp>1268050440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>First off, it went away for a long time. The P4s had hyperthreading, but the Pentium Ds and Core 2s (duos and quads) didn't. It didn't come back until the i7.</p><p>The other reason is that it is useful now. When HT first came out, it was pretty much for desktop chips, and we were still very much a single-core world. Little was designed to truly take advantage of multiple threads in that environment, so people noticed no real speedup. However, now not only are things better at using multiple cores, but the server market is a target for this as well. On servers, multiple hardware threads per core work well. You frequently get situations where you have processes that don't need much processor time, but need it often. The context switching can be killer in terms of overhead. More processes on the chip mitigate that and make more efficient use of the silicon.</p><p>Sun is doing this to a much greater degree, in fact. Their new UltraSPARC processors run more than two threads per core. Probably not that useful on a desktop at this point, but it can be very useful on a web server.</p><p>Hyperthreading is something likely to stick with us at this point. We are moving away from computers that only did one thing at a time, simply switching back and forth between tasks, and towards computers that do a whole lot in parallel.</p></htmltext>
<tokentext>First off it went away for a long time .
The P4s had hyper threading but the Pentium Ds and Core 2s ( duos and quads ) did n't .
It did n't come back until the i7.The other reason is that it is useful now .
When HT first came out , it was pretty much for desktop chips and we were still very much a single core world .
Ok well little was designed to truly take advantage of multiple threads in that environment .
People noticed no real speedup .
However now not only are things better using multiple cores , but the server market is a target for this as well .
On servers , multiple threads per core in hardware work well .
You frequently get situations where you have processors that do n't need much processor time , but need it often .
The context switching can be killer in terms of overhead .
More processes on the chip mitigates that can makes more efficient use of the silicon.Sun is doing this to a much greater degree , in fact .
Their new Ultrasparc processors run more than two threads per core .
Probably not that useful on a desktop at this point but it can be very useful on a web server.Hyperthreading is something likely to stick with us at this point .
We are moving away from computers that only did one thing at a time , and simply switched back and forth between tasks and towards computers that do a whole lot in parallel .</tokentext>
<sentencetext>First off it went away for a long time.
The P4s had hyper threading but the Pentium Ds and Core 2s (duos and quads) didn't.
It didn't come back until the i7.The other reason is that it is useful now.
When HT first came out, it was pretty much for desktop chips and we were still very much a single core world.
Ok well little was designed to truly take advantage of multiple threads in that environment.
People noticed no real speedup.
However now not only are things better using multiple cores, but the server market is a target for this as well.
On servers, multiple threads per core in hardware work well.
You frequently get situations where you have processors that don't need much processor time, but need it often.
The context switching can be killer in terms of overhead.
More processes on the chip mitigates that can makes more efficient use of the silicon.Sun is doing this to a much greater degree, in fact.
Their new Ultrasparc processors run more than two threads per core.
Probably not that useful on a desktop at this point but it can be very useful on a web server.Hyperthreading is something likely to stick with us at this point.
We are moving away from computers that only did one thing at a time, and simply switched back and forth between tasks and towards computers that do a whole lot in parallel.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336</parent>
</comment>
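The comment above describes the server pattern that makes SMT pay off: many tasks that each need little CPU time but need it often, where software context switches are the overhead. A minimal Python sketch of that pattern (a hypothetical workload, not a hardware simulation) uses a thread pool to keep many short, mostly-waiting tasks in flight at once, the same idea SMT applies in hardware:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def short_task(i):
    # Each request needs very little CPU time but arrives often --
    # the web/server workload pattern the comment describes.
    time.sleep(0.01)          # stand-in for waiting on I/O
    return i * 2

# With more workers than one task could keep busy, blocked tasks overlap:
# while one waits, another runs.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(short_task, range(32)))

print(results[:4])   # -> [0, 2, 4, 6]
```

`pool.map` preserves input order, so the results line up with the task indices even though the tasks finished interleaved.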
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405910</id>
	<title>IBM Power7 also has 8 cores</title>
	<author>Anonymous</author>
	<datestamp>1268045340000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext>If it is a matter of a core war, IBM's latest Power7 also has 8 cores. It is actually based on 45nm technology, compared to Intel's latest 32nm. What makes Power7 exciting is that it has 32MB of on-die L3 cache, which IBM achieved by introducing eDRAM (embedded DRAM) into the process. Both Nehalem-EX and Power7 are targeting the low-end server market, so it should be an interesting battle. <p>

<a href="http://arstechnica.com/hardware/news/2009/09/ibms-8-core-power7-twice-the-muscle-half-the-transistors.ars" title="arstechnica.com">http://arstechnica.com/hardware/news/2009/09/ibms-8-core-power7-twice-the-muscle-half-the-transistors.ars</a> [arstechnica.com]</p></htmltext>
<tokenext>If it is matter of core-war , IBM 's latest Power7 also has 8 cores .
It is actually based on 45nm technology compared to Intel 's latest 32nm .
What makes Power7 exciting is that it has on-die 32MB L3 cache .
They achieved this by introducing eDRAM ( embedded DRAM ) in the technology .
Both Nehalem-EX and Power7 are targeting low-end server market , so it should be interesting battle .
http : //arstechnica.com/hardware/news/2009/09/ibms-8-core-power7-twice-the-muscle-half-the-transistors.ars [ arstechnica.com ]</tokentext>
<sentencetext>If it is matter of core-war, IBM's latest Power7 also has 8 cores.
It is actually based on 45nm technology compared to Intel's latest 32nm.
What makes Power7 exciting is that it has on-die 32MB L3 cache.
They achieved this by introducing eDRAM (embedded DRAM) in the technology.
Both Nehalem-EX and Power7 are targeting low-end server market, so it should be interesting battle.
http://arstechnica.com/hardware/news/2009/09/ibms-8-core-power7-twice-the-muscle-half-the-transistors.ars [arstechnica.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407678</id>
	<title>Re:Balance</title>
	<author>Anonymous</author>
	<datestamp>1268052120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>with a ton of UMLs running</p></div></blockquote><p>UML?  Where have you been for the last five years?</p><p>KVM or <a href="http://en.wikipedia.org/wiki/Xen" title="wikipedia.org">Xen</a> [wikipedia.org] are where it's at on Linux.</p>
	</htmltext>
<tokenext>with a ton of UMLs runningUML ?
Where have you been for the last five years ? KVM or Xen [ wikipedia.org ] are where it 's at on Linux .</tokentext>
<sentencetext>with a ton of UMLs runningUML?
Where have you been for the last five years?KVM or Xen [wikipedia.org] are where it's at on Linux.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405868</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406282</id>
	<title>Re:IBM Power7 also has 8 cores</title>
	<author>blind biker</author>
	<datestamp>1268046480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Sun's UltraSPARC T1 had 8 cores and a total of 32 concurrent threads, since 4 years ago. Best of all, that CPU is very low-power. Even better, it's completely open-source. You can download everything:<br>ISA specification<br>Verilog RTL source code of the design<br>Verification environment, diagnostics tests and simulation images</p></htmltext>
<tokenext>Sun 's UltraSPARC T1 had 8 cores and a total of 32 concurrent threads , since 4 years ago .
Best of all , that CPU is very low-power .
Even better , it 's completely open-source .
You can download everything : ISA specificationVerilog RTL source code of the designVerification environment , diagnostics tests and simulation images</tokentext>
<sentencetext>Sun's UltraSPARC T1 had 8 cores and a total of 32 concurrent threads, since 4 years ago.
Best of all, that CPU is very low-power.
Even better, it's completely open-source.
You can download everything:ISA specificationVerilog RTL source code of the designVerification environment, diagnostics tests and simulation images</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405910</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409234</id>
	<title>Re:Finally!</title>
	<author>Hurricane78</author>
	<datestamp>1268062980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It&rsquo;s a bit misleading to say that it is always 15 years. It&rsquo;s exponential; the target grows exponentially. The 15 years it took to bring that processing power down to a phone might be only a couple of weeks nowadays. That gives you a better feeling for it. ^^<br>Of course, the 1TB in 15 years still fits.</p></htmltext>
<tokenext>It    s a bit bad no say that it always is 15 years .
It    s exponential .
The goal is growing exponentially .
That 15 years it took to raise processing power to where you could run it on a phone , is maybe a couple of weeks nowadays .
That gives you a better feeling for it .
^ ^ Of course the 1TB in 15 years still fits .</tokentext>
<sentencetext>It’s a bit bad no say that it always is 15 years.
It’s exponential.
The goal is growing exponentially.
That 15 years it took to raise processing power to where you could run it on a phone, is maybe a couple of weeks nowadays.
That gives you a better feeling for it.
^^Of course the 1TB in 15 years still fits.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406860</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408508</id>
	<title>Re:IBM Power7 also has 8 cores</title>
	<author>maitas</author>
	<datestamp>1268057220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If EX delivers the expected performance, it will have the same performance per socket as Power7, but half the threads. I prefer the better single-thread performance of EX to Power7's.</p></htmltext>
<tokenext>If EX delivers the expected performance , it will have the same performance per socket than Power7 , but half the threads .
I prefer the better single thread performance of EX than Power7 .</tokentext>
<sentencetext>If EX delivers the expected performance, it will have the same performance per socket than Power7, but half the threads.
I prefer the better single thread performance of EX than Power7.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405910</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406944</id>
	<title>Re:Hyperthreading</title>
	<author>Spatial</author>
	<datestamp>1268048640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Long ago?  CPU architectures aren't static.  A Nehalem isn't exactly a Northwood, you know.<br> <br>

It can make a big difference to some applications, like 3D renderers.  Sometimes it doesn't help, but disabling it without considering the typical load is unwise.</htmltext>
<tokenext>Long ago ?
CPU architectures are n't static .
A Nahalem is n't exactly a Northwood you know .
It can make a big difference to some applications , like 3D renderers .
Sometimes it does n't help , but disabling it without considering the typical load is unwise .</tokentext>
<sentencetext>Long ago?
CPU architectures aren't static.
A Nahalem isn't exactly a Northwood you know.
It can make a big difference to some applications, like 3D renderers.
Sometimes it doesn't help, but disabling it without considering the typical load is unwise.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407668</id>
	<title>This again!</title>
	<author>Singularity42</author>
	<datestamp>1268052060000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I get enough hate of x86 in the supermarket tabloids--it's every other story!</p></htmltext>
<tokenext>I get enough hate of x86 in the supermarket tabloids--it 's every other story !</tokentext>
<sentencetext>I get enough hate of x86 in the supermarket tabloids--it's every other story!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406686</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405736</id>
	<title>minimum hardware required....</title>
	<author>rts008</author>
	<datestamp>1268044620000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>Now we know what will be needed to run Win 8, I guess.<br>I better get started on my backyard fusion power plant....;-)</p></htmltext>
<tokenext>Now we know what will be needed to run Win 8 , I guess.I better get started on my backyard fusion power plant.... ; - )</tokentext>
<sentencetext>Now we know what will be needed to run Win 8, I guess.I better get started on my backyard fusion power plant....;-)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405876</id>
	<title>Re:Balance</title>
	<author>fuzzyfuzzyfungus</author>
	<datestamp>1268045160000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>Given that the Nehalems all have integrated memory controllers, I'd assume that the memory I/O situation wouldn't become substantially worse as you scaled up.<br> <br>

From TFS's mention of "up to 8 CPUs <i>or more with third-party node controllers</i>," I'm (perhaps optimistically) assuming that means all the RAM in an up-to-8-socket system wouldn't be more than one hop away from any core.<br> <br>

They almost certainly didn't go with 24MB of cache because their main memory situation is <i>perfect</i>; but Intel's bigger chips are substantially improved from the old "Hey, let's hang a bunch of super-expensive Xeons off a dubiously adequate northbridge through a shared front-side bus, let them starve for memory access, and then get curb-stomped by cheaper Opterons!" days.</htmltext>
<tokenext>Given that the Nehalems all have integrated memory controllers , I 'd assume that the memory I/O situation would n't become substantially worse as you scaled up .
From TFS 's mention of " up to 8 CPUs or more with third-party node controllers " I 'm ( perhaps optimistically ) assuming that that means all the RAM in an up to 8 socket system would n't be more than one hop away from any core .
They almost certainly did n't go with 24MB of cache because their main memory situation is perfect ; but intel 's bigger chips are substantially improved from the old " Hey , let 's hang a bunch of super expensive Xeons off a dubiously adequate northbridge through a shared front-side bus , let them starve for memory access , and then get curb stomped by cheaper Opterons !
" days .</tokentext>
<sentencetext>Given that the Nehalems all have integrated memory controllers, I'd assume that the memory I/O situation wouldn't become substantially worse as you scaled up.
From TFS's mention of "up to 8 CPUs or more with third-party node controllers" I'm(perhaps optimistically) assuming that that means all the RAM in an up to 8 socket system wouldn't be more than one hop away from any core.
They almost certainly didn't go with 24MB of cache because their main memory situation is perfect; but intel's bigger chips are substantially improved from the old "Hey, let's hang a bunch of super expensive Xeons off a dubiously adequate northbridge through a shared front-side bus, let them starve for memory access, and then get curb stomped by cheaper Opterons!
" days.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406860</id>
	<title>Re:Finally!</title>
	<author>Cytotoxic</author>
	<datestamp>1268048280000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>Even funnier, soon enough you'll be running Crysis on your cell phone (or whatever we call it then).  Remember when it was tough to get a decent framerate in Doom with high settings?  You can run that on a cellphone these days.  15 years from "state of the art" to "runs on my cellphone."   Wow.   In 15 years you might have a 1TB database running on your personal communicator that fits in your pocket.  (In keeping with the "15 years out" prediction theme of the day.)</htmltext>
<tokenext>Even funnier , soon enough you 'll be running Crysis on your cell phone ( or whatever we call it then ) .
Remember when it was tough to get decent framerate on Doom with high settings ?
You can run that on a cellphone these days .
15 years from " state of the art " to " runs on my cellphone .
" Wow .
In 15 years you might have a 1TB database running on your personal communicator that fits in your pocket .
( in keeping with the " 15 years out " prediction theme of the day .</tokentext>
<sentencetext>Even funnier, soon enough you'll be running Crysis on your cell phone (or whatever we call it then).
Remember when it was tough to get decent framerate on Doom with high settings?
You can run that on a cellphone these days.
15 years from "state of the art" to "runs on my cellphone.
"   Wow.
In 15 years you might have a 1TB database running on your personal communicator that fits in your pocket.
(in keeping with the "15 years out" prediction theme of the day.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405824</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406772</id>
	<title>Re:When will Moore's Law apply to Cores?</title>
	<author>Anonymous</author>
	<datestamp>1268047980000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext>Surely that'll be Core's Law?</htmltext>
<tokenext>Surely that 'll be Core 's Law ?</tokentext>
<sentencetext>Surely that'll be Core's Law?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406254</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406700</id>
	<title>Re:Hyperthreading</title>
	<author>Anonymous</author>
	<datestamp>1268047740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Depends on what you run on the machine, I suppose.  I run climateprediction.net on an i7 920 running Linux and get better performance running 8 tasks on 8 cores vs. 4 tasks on 4 cores.  Many others on climateprediction.net report the same thing for Linux (don't know about Windows).</p></htmltext>
<tokenext>Depends on what you run on the machine I suppose .
I run climateprediction.net on a i7 920 running Linux and get better performance running 8 tasks on 8 cores vs. 4 tasks on 4 cores .
Many others on climateprediction.net report the same thing for Linux ( do n't know about Windows ) .</tokentext>
<sentencetext>Depends on what you run on the machine I suppose.
I run climateprediction.net on a i7 920 running Linux and get better performance running 8 tasks on 8 cores vs. 4 tasks on 4 cores.
Many others on climateprediction.net report the same thing for Linux (don't know about Windows).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405988</id>
	<title>Re:Balance</title>
	<author>Anonymous</author>
	<datestamp>1268045580000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>Does it have the memory I/O bandwidth to keep up with the CPUs? When will I be able to actually buy a motherboard with 8 of these 8-core CPUs, and what kind of a frame rate would Crysis get on that rig?</p></div><p>Crysis will get the same framerate as before, because FPS is not down to the CPU, it's down to the GPU.</p><p>Go ahead and spend $1000 on a CPU, and I'll spend $1000 on two $500 GPUs with an E8400 or i5.  Then we'll see who gets the better framerate.</p>
	</htmltext>
<tokenext>Does it have the memory I/O bandwidth to keep up with the CPUs ?
When will I be able to actually buy a mother board with 8 of these 8 core CPUs , and what kind of a frame rate would Crysis get on that rig ? Crysis will get the same framerate as before , because FPS is not due to CPU , its due to GPU.Go ahead and spend $ 1000 on a CPU , and i 'll spend $ 1000 on two $ 500 GPU 's with an E8400 or I5 .
Then we 'll see who gets the better framerate</tokentext>
<sentencetext>Does it have the memory I/O bandwidth to keep up with the CPUs?
When will I be able to actually buy a mother board with 8 of these 8 core CPUs, and what kind of a frame rate would Crysis get on that rig?Crysis will get the same framerate as before, because FPS is not due to CPU, its due to GPU.Go ahead and spend $1000 on a CPU, and i'll spend $1000 on two $500 GPU's with an E8400 or I5.
Then we'll see who gets the better framerate
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406128</id>
	<title>Re:programs compatible with 8 cores</title>
	<author>Alastor187</author>
	<datestamp>1268046000000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>I am sure there are plenty of applications out there that can take advantage of this new hardware.  I run finite element and computational fluid dynamics software at work and both are capable of using the 8 cores in my work PC (dual quad core).</p><p>The really sad part though is that for the FEA software I can only use 2 cores because the vendor requires customers to buy a separate HPC license for every processor/core beyond 2.</p></htmltext>
<tokenext>I am sure there are plenty of applications out there that can take advantage of this new hardware .
I run finite element and computational fluid dynamics software at work and both are capable of using the 8 cores in my work PC ( dual quad core ) .The really sad part though is that for the FEA software I can only use 2 cores because the vendor requires customers to buy a separate HPC license for every processor/core beyond 2 .</tokentext>
<sentencetext>I am sure there are plenty of applications out there that can take advantage of this new hardware.
I run finite element and computational fluid dynamics software at work and both are capable of using the 8 cores in my work PC (dual quad core).The really sad part though is that for the FEA software I can only use 2 cores because the vendor requires customers to buy a separate HPC license for every processor/core beyond 2.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405896</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31411556</id>
	<title>What's the point?</title>
	<author>Unnamed Chickenheart</author>
	<datestamp>1268134740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>With Supreme Commander 2 being the disappointment it is*, what's the need for 8 cores?  =-D</p><p>* It runs on the Xbox. It's dumbed down, and it looks and plays like SC&amp;C (Supreme Command and Conquer). Though the game is not all bad; it seems to be what C&amp;C should have been.</p></htmltext>
<tokenext>With Supreme Commander 2 being the disappointment it is * , what 's the need of 8-core ?
= -D * It runs on XBOX .
It 's dumbed down and it looks and plays like SC&amp;C ( Supreme Command and Conquer ) .
Though the game is not all bad .
It seems to be what C&amp;C should have been .</tokentext>
<sentencetext>With Supreme Commander 2 being the disappointment it is*, what's the need of 8-core?
=-D* It runs on XBOX.
It's dumbed down and it looks and plays like SC&amp;C  ( Supreme Command and Conquer).
Though the game is not all bad.
It seems to be what C&amp;C should have been.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31410902</id>
	<title>What I would do with 8 physical cores, double them</title>
	<author>frambris</author>
	<datestamp>1268167380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Mmmm.... a bunch of HP DL360s with two of those in each. Yummmmm....</p><p>/ Server Nerd</p></htmltext>
<tokenext>Mmmm.... a bunch of HP DL360s with two of those in each .
Yummmmm..../ Server Nerd</tokentext>
<sentencetext>Mmmm.... a bunch of HP DL360s with two of those in each.
Yummmmm..../ Server Nerd</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406186</id>
	<title>Re:programs compatible with 8 cores</title>
	<author>hoytak</author>
	<datestamp>1268046180000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Don't know about games, but many types of numerical processing can easily take advantage of this.  ATLAS and other high-performance linear algebra libraries already use all available cores (no, IO is often not the biggest bottleneck with these libraries, as they seem to squeeze out all possible advantages from the L1 / L2 caches).  In other words, for my scientific computations, I would definitely notice a difference.</p><p>Also, OpenMP is becoming easier and easier to use with recent gcc releases, and it only takes a few #pragma statements in some parts of the code to give a huge speedup if you know what you're doing and have appropriate code.</p></htmltext>
<tokenext>Do n't know about games , but many types of numerical processing can easily take advantage of this .
ATLAS and other high-performance linear algebra libraries already use all available cores ( no , IO is often not the biggest bottleneck with these libraries , as they seem to squeeze out all possible advantages from the L1 / L2 caches ) .
In other words , for my scientific computations , I would definitely notice a difference.Also , OpenMP is becoming easier and easier to use with recent gcc releases , and it only takes a few # pragma statements in some parts of the code to give a huge speedup if you know what you 're doing and have appropriate code .</tokentext>
<sentencetext>Don't know about games, but many types of numerical processing can easily take advantage of this.
ATLAS and other high-performance linear algebra libraries already use all available cores (no, IO is often not the biggest bottleneck with these libraries, as they seem to squeeze out all possible advantages from the L1 / L2 caches).
In other words, for my scientific computations, I would definitely notice a difference.Also, OpenMP is becoming easier and easier to use with recent gcc releases, and it only takes a few #pragma statements in some parts of the code to give a huge speedup if you know what you're doing and have appropriate code.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405896</parent>
</comment>
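The OpenMP pattern the comment describes, a few directives that split a loop's iterations across threads and combine per-thread partial results, can be sketched in rough analogue form. OpenMP itself lives in C/C++/Fortran; this hypothetical Python version shows the same split-then-reduce shape:

```python
from concurrent.futures import ThreadPoolExecutor

# OpenMP's "parallel for ... reduction(+:total)" splits loop iterations
# across threads and sums per-thread partials. Same pattern by hand:
# sum of squares of 0..999, split into 4 contiguous chunks.
def partial_sum(lo, hi):
    return sum(i * i for i in range(lo, hi))

chunks = [(i * 250, (i + 1) * 250) for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(lambda bounds: partial_sum(*bounds), chunks))

print(total)   # -> 332833500, same as the serial sum
```

Caveat: pure-Python threads won't actually speed this up because of the GIL; the libraries the comment names (ATLAS, and OpenMP via gcc) run native threads, which is where the real gain comes from. The sketch is only about the work-partitioning shape.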
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406244</id>
	<title>can't believe nobody mentioned this by now...</title>
	<author>Anonymous</author>
	<datestamp>1268046420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"Transcoding pr0n."</p></htmltext>
<tokenext>" Transcoding pr0n .
"</tokentext>
<sentencetext>"Transcoding pr0n.
"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406550</id>
	<title>AMD's competitor, Socket G32</title>
	<author>yuhong</author>
	<datestamp>1268047320000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext>In other news, AMD has a blog article on its soon-to-be-launched competitor to this, the Socket G34 8-core/12-core Opterons:<br>
<a href="http://blogs.amd.com/work/2010/02/22/magny-cours-is-right-on-schedule-and-shipping-to-customers/" title="amd.com" rel="nofollow">http://blogs.amd.com/work/2010/02/22/magny-cours-is-right-on-schedule-and-shipping-to-customers/</a> [amd.com]</htmltext>
<tokenext>In other news , AMD has a blog article on it 's soon to be launched competitor to this , Socket G32 8-core/12-core Opterons : http : //blogs.amd.com/work/2010/02/22/magny-cours-is-right-on-schedule-and-shipping-to-customers/ [ amd.com ]</tokentext>
<sentencetext>In other news, AMD has a blog article on it's soon to be launched competitor to this, Socket G32 8-core/12-core Opterons:
http://blogs.amd.com/work/2010/02/22/magny-cours-is-right-on-schedule-and-shipping-to-customers/ [amd.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31410822</id>
	<title>Re:Actually...</title>
	<author>vertinox</author>
	<datestamp>1268166480000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>It will improve gaming performance if you happen to be running something like <a href="http://en.wikipedia.org/wiki/Quake_Wars:_Ray_Traced" title="wikipedia.org">Quake Wars ray traced</a> [wikipedia.org].</p><p>Intel put together a demo on a workstation system with two Nehalem quad-core CPUs getting about 15-20 fps.</p><p>Since ray tracing is <a href="http://en.wikipedia.org/wiki/Embarrassingly_parallel" title="wikipedia.org">embarrassingly parallel</a> [wikipedia.org], all one needs to do to improve performance is throw more cores at it.</p><p>Keep in mind that ray tracing is much more CPU-intensive than GPU-intensive...</p></htmltext>
<tokenext>It will improve gaming performance if you happened to be running something like Quakes Wars in ray tracing [ wikipedia.org ] .Intel put together a demo on a workstation system with two Nehalem quad-core CPUs getting about 15 - 20 fps.Since ray tracing is embarrassingly parallel [ wikipedia.org ] , all one needs to do to improve performance is to throw more cores at it.Keep in mind ray tracing is much more cpu intensive than gpu intensive.. .</tokentext>
<sentencetext>It will improve gaming performance if you happened to be running something like Quakes Wars in ray tracing [wikipedia.org].Intel put together a demo on a workstation system with two Nehalem  quad-core CPUs getting about 15 - 20 fps.Since ray tracing is embarrassingly parallel [wikipedia.org], all one needs to do to improve performance is to throw more cores at it.Keep in mind ray tracing is much more cpu intensive than gpu intensive...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405868</parent>
</comment>
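"Embarrassingly parallel" here means each pixel's ray is computed from nothing but its own coordinates, so a frame can be cut into bands and handed to as many cores as you have with no shared state. A minimal sketch (the `shade` function is a hypothetical stand-in for tracing one ray, not real renderer code):

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 64, 48

def shade(x, y):
    # Stand-in for tracing one ray; real shading is likewise pure
    # per-pixel math with no dependence on neighbouring pixels.
    return (x * 31 + y * 17) % 256

def render_rows(y0, y1):
    # A band of rows depends only on its own pixel coordinates --
    # no locks, no communication between workers.
    return [[shade(x, y) for x in range(WIDTH)] for y in range(y0, y1)]

# Split the frame into 4 horizontal bands of 12 rows, one per worker.
bands = [(i * 12, (i + 1) * 12) for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    image = [row
             for band in pool.map(lambda b: render_rows(*b), bands)
             for row in band]

assert len(image) == HEIGHT and len(image[0]) == WIDTH
```

Because `pool.map` returns bands in input order, the flattened rows come out in top-to-bottom order; doubling the worker count just means narrower bands, which is why more cores map so directly onto ray-tracing performance.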
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407642</id>
	<title>Re:Hyperthreading</title>
	<author>imashination</author>
	<datestamp>1268051940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Maybe test things rather than just guessing? HT is a ~20% speedup for anyone doing 3D rendering. <a href="http://www.cbscores.com/" title="cbscores.com" rel="nofollow">http://www.cbscores.com/</a> [cbscores.com]</htmltext>
<tokenext>Maybe test things rather than just guessing ?
HT is a ~ 20 \ % speedup for anyone doing 3D rendering .
http : //www.cbscores.com/ [ cbscores.com ]</tokentext>
<sentencetext>Maybe test things rather than just guessing?
HT is a ~20\% speedup for anyone doing 3D rendering.
http://www.cbscores.com/ [cbscores.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408014</id>
	<title>Re:Somebody's gotta ask...</title>
	<author>weaponsfree</author>
	<datestamp>1268053920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>So, how soon until newegg.com has the fake ones in stock?</p></div><p>For the uninitiated:
<a href="http://www.overclockers.com/forums/showthread.php?p=6422425" title="overclockers.com" rel="nofollow">http://www.overclockers.com/forums/showthread.php?p=6422425</a> [overclockers.com]</p>
	</htmltext>
<tokenext>So , how soon until newegg.com has the fake ones in stock ? For the uninitiated : http : //www.overclockers.com/forums/showthread.php ? p = 6422425 [ overclockers.com ]</tokentext>
<sentencetext>So, how soon until newegg.com has the fake ones in stock?For the uninitiated:
http://www.overclockers.com/forums/showthread.php?p=6422425 [overclockers.com]
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407540</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407668
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406686
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31410020
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405630
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407940
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405630
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407642
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408492
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407540
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_56</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406376
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406162
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_57</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405836
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405680
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_48</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409408
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405910
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_51</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408090
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405824
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406772
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406254
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407678
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405868
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406700
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408508
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405910
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31410674
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405680
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409510
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405868
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_49</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406288
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405910
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407200
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405814
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405680
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_50</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405916
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_46</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31411524
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408376
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406686
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31412074
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408376
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406686
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409234
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406860
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405824
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31412484
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408014
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407540
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406106
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405896
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406706
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406128
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405896
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407332
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_55</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407042
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406162
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408184
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405896
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31411132
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406860
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405824
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31421536
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406860
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405824
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406282
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405910
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407930
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405630
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407826
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405868
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406738
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_54</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407082
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405868
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_53</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31412070
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406254
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406694
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405630
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406710
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409798
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405824
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31427174
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405982
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407636
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405680
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406186
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405896
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31410822
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405868
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_47</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409940
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405910
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_52</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408358
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406254
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31420368
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407688
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405824
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406372
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405868
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406508
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405896
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31420458
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405876
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408206
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406254
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406944
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31411380
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_08_2049235_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408770
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_08_2049235.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31411790
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_08_2049235.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407540
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408014
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408492
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_08_2049235.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405740
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407200
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405876
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31420458
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405916
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405988
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405982
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405868
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407082
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407826
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407678
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406372
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31410822
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409510
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31420368
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_08_2049235.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406686
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407668
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408376
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31411524
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31412074
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_08_2049235.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405896
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406106
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408184
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406186
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406508
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407552
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406128
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_08_2049235.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406162
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406376
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407042
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_08_2049235.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405824
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409798
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406860
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409234
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31421536
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31411132
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408090
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407688
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_08_2049235.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408770
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409558
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_08_2049235.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405910
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409408
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406282
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409940
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408508
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406288
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_08_2049235.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406254
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406772
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31412070
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408206
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31408358
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_08_2049235.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406374
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_08_2049235.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405736
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_08_2049235.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405630
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407930
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31410020
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407940
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406694
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_08_2049235.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406336
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31427174
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406710
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407642
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406706
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406700
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31412484
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406944
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31406738
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31409560
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407332
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31411380
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_08_2049235.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405680
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31410674
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405814
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31407636
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_08_2049235.31405836
</commentlist>
</conversation>
