<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_06_02_0051235</id>
	<title>AMD's Six-Core Istanbul Opterons</title>
	<author>kdawson</author>
	<datestamp>1243944960000</datestamp>
	<htmltext>EconolineCrush writes <i>"AMD's latest 'Istanbul' Opterons add two cores per socket, for a grand total of six. Despite the extra cores, these new chips reside within the same power envelope as existing quad-core Opterons, and they're drop-in compatible with current systems. The Tech Report has an <a href="http://techreport.com/articles.x/17005">in-depth review of the new chips</a>, comparing their performance and power efficiency with that of Intel's Nehalem-based Xeons. Istanbul fares surprisingly well, particularly when one considers its <a href="http://techreport.com/articles.x/17005/6">performance-power ratio</a> with highly parallelized workloads."</i></htmltext>
<tokentext>EconolineCrush writes " AMD 's latest 'Istanbul ' Opterons add two cores per socket , for a grand total of six .
Despite the extra cores , these new chips reside within the same power envelope as existing quad-core Opterons , and they 're drop-in compatible with current systems .
The Tech Report has an in-depth review of the new chips , comparing their performance and power efficiency with that of Intel 's Nehalem-based Xeons .
Istanbul fares surprisingly well , particularly when one considers its performance-power ratio with highly parallelized workloads .
"</tokentext>
<sentencetext>EconolineCrush writes "AMD's latest 'Istanbul' Opterons add two cores per socket, for a grand total of six.
Despite the extra cores, these new chips reside within the same power envelope as existing quad-core Opterons, and they're drop-in compatible with current systems.
The Tech Report has an in-depth review of the new chips, comparing their performance and power efficiency with that of Intel's Nehalem-based Xeons.
Istanbul fares surprisingly well, particularly when one considers its performance-power ratio with highly parallelized workloads.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28187615</id>
	<title>Re:Another test at anandtech.com</title>
	<author>MobyDisk</author>
	<datestamp>1243936320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Are they seriously touting hyperthreading as a benefit?  It's a dubious-enough feature, but with 4 cores, it really stretches believability.  I dare someone to find the one application that benefits from seeing 2 additional fake CPUs when there are already 4 real ones.</p></htmltext>
<tokentext>Are they seriously touting hyperthreading as a benefit ?
It 's a dubious-enough feature , but with 4 cores , it really stretches believability .
I dare someone to find the one application that benefits from seeing 2 additional fake CPUs when there are already 4 real ones .</tokentext>
<sentencetext>Are they seriously touting hyperthreading as a benefit?
It's a dubious-enough feature, but with 4 cores, it really stretches believability.
I dare someone to find the one application that benefits from seeing 2 additional fake CPUs when there are already 4 real ones.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181029</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182233</id>
	<title>Underutilized</title>
	<author>noppy</author>
	<datestamp>1243957440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>How many of your favorite app already re-written to take advantage of the additional cores?<br/>How many of your favorite compiler already re-designed to generate codes that uses additional cores?<br/>How many of your favorite boss already re-wired to fuss about the additional cores?</p></htmltext>
<tokentext>How many of your favorite app already re-written to take advantage of the additional cores ?
How many of your favorite compiler already re-designed to generate codes that uses additional cores ?
How many of your favorite boss already re-wired to fuss about the additional cores ?</tokentext>
<sentencetext>How many of your favorite app already re-written to take advantage of the additional cores?
How many of your favorite compiler already re-designed to generate codes that uses additional cores?
How many of your favorite boss already re-wired to fuss about the additional cores?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180477</id>
	<title>Fuck Everything, We're Doing Six Cores</title>
	<author>Anonymous</author>
	<datestamp>1243949460000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext>You think it's crazy? It is crazy. But I don't give a shit. From now on, we're the ones who have the edge in the multi-core game. What part of this don't you understand? If two cores is good, and four cores is better, obviously six cores would make us the best fucking processor that ever existed. Comprende? We didn't claw our way to the top of the CPU game by clinging to the two-core industry standard. We got here by taking chances. Well, six cores is the biggest chance of all. <br/> <br/>

Here's the report from Engineering. Someone put it in the bathroom: I want to wipe my ass with it. They don't tell me what to invent; I tell them. And I'm telling them to stick two more cores in there. I don't care how. Make the cores so thin they're invisible. I don't care if they have to cram the sixth blade in perpendicular to the other five, just do it!</htmltext>
<tokentext>You think it 's crazy ?
It is crazy .
But I do n't give a shit .
From now on , we 're the ones who have the edge in the multi-core game .
What part of this do n't you understand ?
If two cores is good , and four cores is better , obviously six cores would make us the best fucking processor that ever existed .
Comprende ? We did n't claw our way to the top of the CPU game by clinging to the two-core industry standard .
We got here by taking chances .
Well , six cores is the biggest chance of all .
Here 's the report from Engineering .
Someone put it in the bathroom : I want to wipe my ass with it .
They do n't tell me what to invent ; I tell them .
And I 'm telling them to stick two more cores in there .
I do n't care how .
Make the cores so thin they 're invisible .
I do n't care if they have to cram the sixth blade in perpendicular to the other five , just do it !</tokentext>
<sentencetext>You think it's crazy?
It is crazy.
But I don't give a shit.
From now on, we're the ones who have the edge in the multi-core game.
What part of this don't you understand?
If two cores is good, and four cores is better, obviously six cores would make us the best fucking processor that ever existed.
Comprende? We didn't claw our way to the top of the CPU game by clinging to the two-core industry standard.
We got here by taking chances.
Well, six cores is the biggest chance of all.
Here's the report from Engineering.
Someone put it in the bathroom: I want to wipe my ass with it.
They don't tell me what to invent; I tell them.
And I'm telling them to stick two more cores in there.
I don't care how.
Make the cores so thin they're invisible.
I don't care if they have to cram the sixth blade in perpendicular to the other five, just do it!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180891</id>
	<title>Re:Fuck Everything, We're Doing Six Cores</title>
	<author>Anonymous</author>
	<datestamp>1243951800000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Sounds exactly like Gillette saying "<a href="http://www.theonion.com/content/node/33930" title="theonion.com" rel="nofollow">Fuck Everything, We're Doing Five Blades</a> [theonion.com]"</p></htmltext>
<tokentext>Sounds exactly like Gillette saying " Fuck Everything , We 're Doing Five Blades [ theonion.com ] "</tokentext>
<sentencetext>Sounds exactly like Gillette saying "Fuck Everything, We're Doing Five Blades [theonion.com]"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180477</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181035</id>
	<title>Re:</title>
	<author>chrispitude</author>
	<datestamp>1243952640000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>0</modscore>
	<htmltext>And nothing of value was posted.</htmltext>
<tokentext>And nothing of value was posted .</tokentext>
<sentencetext>And nothing of value was posted.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28194065</id>
	<title>Re:Istanbul runs your shells</title>
	<author>Phoghat</author>
	<datestamp>1244035080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>My pappy said, "Son, you're gonna' drive me to drinkin'
If you don't stop drivin' that Hot Rod Lincoln"....</htmltext>
<tokentext>My pappy said , " Son , you 're gon na ' drive me to drinkin ' If you do n't stop drivin ' that Hot Rod Lincoln " ... .</tokentext>
<sentencetext>My pappy said, "Son, you're gonna' drive me to drinkin'
If you don't stop drivin' that Hot Rod Lincoln"....</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180349</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180389</id>
	<title>Wasn't it called Constantinople?</title>
	<author>Anonymous</author>
	<datestamp>1243948800000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext>Or isn't that anyones business but the Turks?</htmltext>
<tokentext>Or is n't that anyones business but the Turks ?</tokentext>
<sentencetext>Or isn't that anyones business but the Turks?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180823</id>
	<title>Re:Fuck Everything, We're Doing Six Cores</title>
	<author>Anonymous</author>
	<datestamp>1243951440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>flamebait? That was the first thing I thought of when I read the headline.</p><p><a href="http://www.boingboing.net/2005/09/14/gillettes-5blade-raz.html" title="boingboing.net" rel="nofollow">http://www.boingboing.net/2005/09/14/gillettes-5blade-raz.html</a> [boingboing.net]</p><p>The first core calculates close, the second even closer,<nobr> <wbr/></nobr>...</p></htmltext>
<tokentext>flamebait ?
That was the first thing I thought of when I read the headline .
http : //www.boingboing.net/2005/09/14/gillettes-5blade-raz.html [ boingboing.net ]
The first core calculates close , the second even closer , ...</tokentext>
<sentencetext>flamebait?
That was the first thing I thought of when I read the headline.
http://www.boingboing.net/2005/09/14/gillettes-5blade-raz.html [boingboing.net]
The first core calculates close, the second even closer, ...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180477</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182041</id>
	<title>Unfortunate theological implications</title>
	<author>Anonymous</author>
	<datestamp>1243956780000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>So if somebody builds a cluster using three of these six core processors, does that make it the Beowulf Cluster of the Beast?</p></htmltext>
<tokentext>So if somebody builds a cluster using three of these six core processors , does that make it the Beowulf Cluster of the Beast ?</tokentext>
<sentencetext>So if somebody builds a cluster using three of these six core processors, does that make it the Beowulf Cluster of the Beast?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28184185</id>
	<title>Re:No.</title>
	<author>nabsltd</author>
	<datestamp>1243964880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>My 3.5ghz Pentium 4 with the useless multithreading turned off kicks the crap out of HD Video rendering than anything else.</p></div><p>Try a Core i7 at around 3GHz and be amazed.</p><p>If you have a multi-threaded app, you get 8 really usable threads.  Running six copies of LAME at the same time, I can convert a 12-track CD into MP3 in about 60 seconds on a 3.33GHz i7 920.</p><p>Video conversion is similarly speedy, although HD isn't as good without a lot of memory, too, as there's a lot more bits to move around.</p></htmltext>
<tokentext>My 3.5ghz Pentium 4 with the useless multithreading turned off kicks the crap out of HD Video rendering than anything else .
Try a Core i7 at around 3GHz and be amazed .
If you have a multi-threaded app , you get 8 really usable threads .
Running six copies of LAME at the same time , I can convert a 12-track CD into MP3 in about 60 seconds on a 3.33GHz i7 920 .
Video conversion is similarly speedy , although HD is n't as good without a lot of memory , too , as there 's a lot more bits to move around .</tokentext>
<sentencetext>My 3.5ghz Pentium 4 with the useless multithreading turned off kicks the crap out of HD Video rendering than anything else.
Try a Core i7 at around 3GHz and be amazed.
If you have a multi-threaded app, you get 8 really usable threads.
Running six copies of LAME at the same time, I can convert a 12-track CD into MP3 in about 60 seconds on a 3.33GHz i7 920.
Video conversion is similarly speedy, although HD isn't as good without a lot of memory, too, as there's a lot more bits to move around.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182123</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182177</id>
	<title>Re:But...</title>
	<author>scotch</author>
	<datestamp>1243957260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>That's nothing compared to 14 cores.</p></div><p>You are bad at math.</p></htmltext>
<tokentext>That 's nothing compared to 14 cores .
You are bad at math .</tokentext>
<sentencetext>That's nothing compared to 14 cores.
You are bad at math.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180443</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180547</id>
	<title>Do i need Erlang?</title>
	<author>Anonymous</author>
	<datestamp>1243949760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Harnessing muli-cpu machines with these installed is going to be.... Interesting.</p></htmltext>
<tokentext>Harnessing muli-cpu machines with these installed is going to be.... Interesting .</tokentext>
<sentencetext>Harnessing muli-cpu machines with these installed is going to be.... Interesting.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180555</id>
	<title>NoKtUrNaL005</title>
	<author>Anonymous</author>
	<datestamp>1243949760000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>this crap is fucking crazy how can they do this</p></htmltext>
<tokentext>this crap is fucking crazy how can they do this</tokentext>
<sentencetext>this crap is fucking crazy how can they do this</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180723</id>
	<title>No.</title>
	<author>Timothy Brownawell</author>
	<datestamp>1243950840000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><div class="quote"><p>Harnessing muli-cpu machines with these installed is going to be.... Interesting.</p></div><p>No more interesting than existing many-core machines.</p><p>Seriously, having a couple dozen or more cores is nothing new.</p></htmltext>
<tokentext>Harnessing muli-cpu machines with these installed is going to be.... Interesting .
No more interesting than existing many-core machines .
Seriously , having a couple dozen or more cores is nothing new .</tokentext>
<sentencetext>Harnessing muli-cpu machines with these installed is going to be.... Interesting.
No more interesting than existing many-core machines.
Seriously, having a couple dozen or more cores is nothing new.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180547</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182123</id>
	<title>Re:No.</title>
	<author>Lumpy</author>
	<datestamp>1243957080000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext><p>Nope but it sucks for anything processor intensive.</p><p>My 3.5ghz Pentium 4 with the useless multithreading turned off kicks the crap out of HD Video rendering than anything else.   I can view full res REDOne video on it smooth without hickups,   I CANT on a Quad core 2.2ghz box.</p><p>They need to get the core speeds back up.  Even in gaming the Single core old crap with a higher clock speed kicks the new stuff.</p><p>6 cores rocks for SQL or anything that is SMP capable and highly multithreaded.  But for the stuff that is single thread design or needs brute force you cant beat a high clock rate cingle core.</p></htmltext>
<tokentext>Nope but it sucks for anything processor intensive .
My 3.5ghz Pentium 4 with the useless multithreading turned off kicks the crap out of HD Video rendering than anything else .
I can view full res REDOne video on it smooth without hickups , I CANT on a Quad core 2.2ghz box .
They need to get the core speeds back up .
Even in gaming the Single core old crap with a higher clock speed kicks the new stuff .
6 cores rocks for SQL or anything that is SMP capable and highly multithreaded .
But for the stuff that is single thread design or needs brute force you cant beat a high clock rate cingle core .</tokentext>
<sentencetext>Nope but it sucks for anything processor intensive.
My 3.5ghz Pentium 4 with the useless multithreading turned off kicks the crap out of HD Video rendering than anything else.
I can view full res REDOne video on it smooth without hickups,   I CANT on a Quad core 2.2ghz box.
They need to get the core speeds back up.
Even in gaming the Single core old crap with a higher clock speed kicks the new stuff.
6 cores rocks for SQL or anything that is SMP capable and highly multithreaded.
But for the stuff that is single thread design or needs brute force you cant beat a high clock rate cingle core.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180723</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181759</id>
	<title>Re:Istanbul runs your shells</title>
	<author>Lumpy</author>
	<datestamp>1243956000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>But wasn't Istanbul called Constantinople?</p><p>And what do the Turks think about that?</p></htmltext>
<tokentext>But was n't Istanbul called Constantinople ?
And what do the Turks think about that ?</tokentext>
<sentencetext>But wasn't Istanbul called Constantinople?
And what do the Turks think about that?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180349</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28184829</id>
	<title>Re:Fuck Everything, We're Doing Six Cores</title>
	<author>Gldm</author>
	<datestamp>1243967460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>And suddenly my sig is relevant again.<nobr> <wbr/></nobr>;)</htmltext>
<tokentext>And suddenly my sig is relevant again .
; )</tokentext>
<sentencetext>And suddenly my sig is relevant again.
;)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180477</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28187771</id>
	<title>Re:No.</title>
	<author>Anonymous</author>
	<datestamp>1243936860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Nehalem processors dynamically scale.  If only two of the four cores are being used, those two will get more juice and the frequency will be higher.  The same if only one core is being used.  Power all but shutdown to the idle cores.</p></htmltext>
<tokentext>Nehalem processors dynamically scale .
If only two of the four cores are being used , those two will get more juice and the frequency will be higher .
The same if only one core is being used .
Power all but shutdown to the idle cores .</tokentext>
<sentencetext>Nehalem processors dynamically scale.
If only two of the four cores are being used, those two will get more juice and the frequency will be higher.
The same if only one core is being used.
Power all but shutdown to the idle cores.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182123</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28184101</id>
	<title>Re:Scary Quote from the Article</title>
	<author>dpilot</author>
	<datestamp>1243964580000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>I'm in the silicon business.  Not CPU, but still silicon.</p><p>It sounds as if AMD budgeted time for another pass at the design, and turned out not to need it.  The amount of time they pulled out of the schedule looks more like a silicon pass than short-cutting testing and validation.  Adding that extra pass, and making sure it was scheduled is probably a result of having been so badly burned last time, but that's good.  You can always be a hero by doing better than plan.</p></htmltext>
<tokentext>I 'm in the silicon business .
Not CPU , but still silicon .
It sounds as if AMD budgeted time for another pass at the design , and turned out not to need it .
The amount of time they pulled out of the schedule looks more like a silicon pass than short-cutting testing and validation .
Adding that extra pass , and making sure it was scheduled is probably a result of having been so badly burned last time , but that 's good .
You can always be a hero by doing better than plan .</tokentext>
<sentencetext>I'm in the silicon business.
Not CPU, but still silicon.
It sounds as if AMD budgeted time for another pass at the design, and turned out not to need it.
The amount of time they pulled out of the schedule looks more like a silicon pass than short-cutting testing and validation.
Adding that extra pass, and making sure it was scheduled is probably a result of having been so badly burned last time, but that's good.
You can always be a hero by doing better than plan.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181193</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180349</id>
	<title>Istanbul runs your shells</title>
	<author>smitty_one_each</author>
	<datestamp>1243948560000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext>Istanbul runs your shells<br/>
Through shaves as tight as Dardanelles.<br/>
Use Opteron and the gallant foamy,<br/>
And thus avoid <a href="http://en.wikipedia.org/wiki/Gallipoli_Campaign" title="wikipedia.org">Gallipoli</a> [wikipedia.org].<br/>
<b>Burma Shave</b></htmltext>
<tokentext>Istanbul runs your shells Through shaves as tight as Dardanelles .
Use Opteron and the gallant foamy , And thus avoid Gallipoli [ wikipedia.org ] .
Burma Shave</tokentext>
<sentencetext>Istanbul runs your shells
Through shaves as tight as Dardanelles.
Use Opteron and the gallant foamy,
And thus avoid Gallipoli [wikipedia.org].
Burma Shave</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28191459</id>
	<title>Re:Another test at anandtech.com</title>
	<author>physburn</author>
	<datestamp>1243960020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I'm most surprised that AMDs extra two cores didn't give it an advantage in many of the server
applications, I know that the Xeons are 4 way superscalar (instructions running in the pipeline
in each core) versus AMDs 3 way. So as the article said its only 18 AMD instructions per
clock versus 16 intels, instead of 4 versus 3. But this is only for the shorter instructions. 8 core
xeons are expected in autumn so any tenuous lead AMD has anywhere in performance is
going to disappear fairly soon. But never-mind, AMD still can win on price and expandability.
I'm running dual core, dual socket on my server at the moment, so slipping in a couple
of new instanbuls or shanghis is a instant double or tripling of power.

<p>

<a href="http://www.feeddistiller.com/blogs/CPUs/feed.html" title="feeddistiller.com" rel="nofollow">CPU feed</a> [feeddistiller.com] @ <a href="http://www.feeddistiller.com/" title="feeddistiller.com" rel="nofollow">Feed Distiller</a> [feeddistiller.com]</p></htmltext>
<tokentext>I 'm most surprised that AMDs extra two cores did n't give it an advantage in many of the server applications , I know that the Xeons are 4 way superscalar ( instructions running in the pipeline in each core ) versus AMDs 3 way .
So as the article said its only 18 AMD instructions per clock versus 16 intels , instead of 4 versus 3 .
But this is only for the shorter instructions .
8 core xeons are expected in autumn so any tenuous lead AMD has anywhere in performance is going to disappear fairly soon .
But never-mind , AMD still can win on price and expandability .
I 'm running dual core , dual socket on my server at the moment , so slipping in a couple of new instanbuls or shanghis is a instant double or tripling of power .
CPU feed [ feeddistiller.com ] @ Feed Distiller [ feeddistiller.com ]</tokentext>
<sentencetext>I'm most surprised that AMDs extra two cores didn't give it an advantage in many of the server
applications, I know that the Xeons are 4 way superscalar (instructions running in the pipeline
in each core) versus AMDs 3 way.
So as the article said its only 18 AMD instructions per
clock versus 16 intels, instead of 4 versus 3.
But this is only for the shorter instructions.
8 core
xeons are expected in autumn so any tenuous lead AMD has anywhere in performance is
going to disappear fairly soon.
But never-mind, AMD still can win on price and expandability.
I'm running dual core, dual socket on my server at the moment, so slipping in a couple
of new instanbuls or shanghis is a instant double or tripling of power.
CPU feed [feeddistiller.com] @ Feed Distiller [feeddistiller.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181029</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181029</id>
	<title>Another test at anandtech.com</title>
	<author>Anonymous</author>
	<datestamp>1243952640000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p><a href="http://it.anandtech.com/IT/showdoc.aspx?i=3571" title="anandtech.com">http://it.anandtech.com/IT/showdoc.aspx?i=3571</a> [anandtech.com]</p><p>Includes information about virtualization performance: <a href="http://it.anandtech.com/IT/showdoc.aspx?i=3571&amp;p=9" title="anandtech.com">http://it.anandtech.com/IT/showdoc.aspx?i=3571&amp;p=9</a> [anandtech.com]</p><p>Conclusion:<br>"The six-core Opteron is not an alternative to the mighty Xeons in every application. The Xeons are more versatile thanks to the higher clockspeeds, higher IPC, Hyperthreading and higher bandwidth to memory. The Xeon 55xx series is clearly the better choice in OLTP, ERP, webserving, rendering and there is little doubt that it will continue to reign in the bandwidth intensive HPC workloads. There are two types of applications where we feel that the AMD six-core deserves your attention: decision support databases and virtualization."</p></htmltext>
<tokentext>http : //it.anandtech.com/IT/showdoc.aspx ? i = 3571 [ anandtech.com ] Includes information about virtualization performance : http : //it.anandtech.com/IT/showdoc.aspx ? i = 3571&amp;p = 9 [ anandtech.com ] Conclusion : " The six-core Opteron is not an alternative to the mighty Xeons in every application .
The Xeons are more versatile thanks to the higher clockspeeds , higher IPC , Hyperthreading and higher bandwidth to memory .
The Xeon 55xx series is clearly the better choice in OLTP , ERP , webserving , rendering and there is little doubt that it will continue to reign in the bandwidth intensive HPC workloads .
There are two types of applications where we feel that the AMD six-core deserves your attention : decision support databases and virtualization .
"</tokentext>
<sentencetext>http://it.anandtech.com/IT/showdoc.aspx?i=3571 [anandtech.com]
Includes information about virtualization performance: http://it.anandtech.com/IT/showdoc.aspx?i=3571&amp;p=9 [anandtech.com]
Conclusion: "The six-core Opteron is not an alternative to the mighty Xeons in every application.
The Xeons are more versatile thanks to the higher clockspeeds, higher IPC, Hyperthreading and higher bandwidth to memory.
The Xeon 55xx series is clearly the better choice in OLTP, ERP, webserving, rendering and there is little doubt that it will continue to reign in the bandwidth intensive HPC workloads.
There are two types of applications where we feel that the AMD six-core deserves your attention: decision support databases and virtualization.
"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182991</id>
	<title>Re:Scary Quote from the Article</title>
	<author>Narishma</author>
	<datestamp>1243960020000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>From an interview with <a href="http://www.bit-tech.net/news/hardware/2009/06/01/amd-launches-6-core-istanbul-opteron-proces/1" title="bit-tech.net">bit-tech</a> [bit-tech.net]:<div class="quote"><p> <b>bit-tech</b>: Has the launch of Istanbul been brought forward in response to Nehalem EX's updated launch date?</p><p>
<b>Patler</b>: Istanbul being pulled in by five months is a result of excellent execution by our design and manufacturing teams who were about to take it from first stepping of silicon to production. Also, the fact that Istanbul is based on our existing socket infrastructure, enables our OEMs to save time on validation cycles that are normally associated with a new processor that delivers the performance Istanbul can.</p></div>
	</htmltext>
<tokentext>From an interview with bit-tech [ bit-tech.net ] : bit-tech : Has the launch of Istanbul been brought forward in response to Nehalem EX 's updated launch date ?
Patler : Istanbul being pulled in by five months is a result of excellent execution by our design and manufacturing teams who were about to take it from first stepping of silicon to production .
Also , the fact that Istanbul is based on our existing socket infrastructure , enables our OEMs to save time on validation cycles that are normally associated with a new processor that delivers the performance Istanbul can .</tokentext>
<sentencetext>From an interview with bit-tech [bit-tech.net]: bit-tech: Has the launch of Istanbul been brought forward in response to Nehalem EX's updated launch date?
Patler: Istanbul being pulled in by five months is a result of excellent execution by our design and manufacturing teams who were about to take it from first stepping of silicon to production.
Also, the fact that Istanbul is based on our existing socket infrastructure, enables our OEMs to save time on validation cycles that are normally associated with a new processor that delivers the performance Istanbul can.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181193</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28200651</id>
	<title>You can have it</title>
	<author>Anonymous</author>
	<datestamp>1244021400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Is your disk I/O more bandwidth-intensive (smaller amount of large operations/files) or operations-intensive (lots of small operations)? Actually, either way you might want to check out Fusion-io's products (Product page - http://www.fusionio.com/Products.aspx; spec sheet1 - http://www.fusionio.com/PDFs/Fusion_Specsheet.pdf; spec sheet2 - http://www.fusionio.com/PDFs/Fusion_ioDriveDuo_datasheet_v3.pdf). It's pricey, but you said you're not afraid to throw money at it :)</p><p>Performance specs from the glossies for a 160GB unit:</p><p>IOPS in mixed r/w: 120,000+ (4k packet size)<br>Read Bandwidth: 1.0 GB/s (32K packet size)<br>Write Bandwidth: 1.5 GB/s (32K packet size)</p><p>Yes, I did mean one hundred twenty THOUSAND IOPS (try beating that with SAS!) and the bandwidth is in gigaBYTES per second, not gigaBITS. Yes, the IOPS rating is for a single unit, not an array of any kind. And yes, these are measured speeds, not interface speeds (SATAII has a 3 Gbps interface speed, but try to actually squeeze all of that bandwidth out of there on a sustained basis). No, I don't work for these guys, but this kind of storage performance just blows my mind. I can't wait to see a 10 Gbps SAN slapped full of 16 or 20 of these things...</p><p>Cost is about $20/GB.</p></htmltext>
<tokenext>Is your disk I/O more bandwidth-intensive ( smaller amount of large operations/files ) or operations-intensive ( lots of small operations ) ?
Actually , either way you might want to check out Fusion-io 's products ( Product page - http : //www.fusionio.com/Products.aspx ; spec sheet1 - http : //www.fusionio.com/PDFs/Fusion_Specsheet.pdf ; spec sheet2 - http : //www.fusionio.com/PDFs/Fusion_ioDriveDuo_datasheet_v3.pdf ) .
It 's pricey , but you said you 're not afraid to throw money at it : ) Performance specs from the glossies for a 160GB unit : IOPS in mixed r/w : 120,000 + ( 4k packet size ) Read Bandwidth : 1.0 GB/s ( 32K packet size ) Write Bandwidth : 1.5 GB/s ( 32K packet size ) Yes , I did mean one hundred twenty THOUSAND IOPS ( try beating that with SAS !
) and the bandwidth is in gigaBYTES per second , not gigaBITS .
Yes , the IOPS rating is for a single unit , not an array of any kind .
And yes , these are measured speeds , not interface speeds ( SATAII has a 3 Gbps interface speed , but try to actually squeeze all of that bandwidth out of there on a sustained basis ) .
No , I do n't work for these guys , but this kind of storage performance just blows my mind .
I ca n't wait to see a 10 Gbps SAN slapped full of 16 or 20 of these things...Cost is about $ 20/GB .</tokentext>
<sentencetext>Is your disk I/O more bandwidth-intensive (smaller amount of large operations/files) or operations-intensive (lots of small operations)?
Actually, either way you might want to check out Fusion-io's products (Product page - http://www.fusionio.com/Products.aspx; spec sheet1 - http://www.fusionio.com/PDFs/Fusion_Specsheet.pdf; spec sheet2 - http://www.fusionio.com/PDFs/Fusion_ioDriveDuo_datasheet_v3.pdf).
It's pricey, but you said you're not afraid to throw money at it :) Performance specs from the glossies for a 160GB unit: IOPS in mixed r/w: 120,000+ (4k packet size); Read Bandwidth: 1.0 GB/s (32K packet size); Write Bandwidth: 1.5 GB/s (32K packet size). Yes, I did mean one hundred twenty THOUSAND IOPS (try beating that with SAS!
) and the bandwidth is in gigaBYTES per second, not gigaBITS.
Yes, the IOPS rating is for a single unit, not an array of any kind.
And yes, these are measured speeds, not interface speeds (SATAII has a 3 Gbps interface speed, but try to actually squeeze all of that bandwidth out of there on a sustained basis).
No, I don't work for these guys, but this kind of storage performance just blows my mind.
I can't wait to see a 10 Gbps SAN slapped full of 16 or 20 of these things... Cost is about $20/GB.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28184593</parent>
</comment>
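The quoted spec-sheet figures above can be sanity-checked with a little arithmetic. A minimal sketch, using only the numbers claimed in the comment (none independently verified):

```python
# Back-of-the-envelope check of the quoted Fusion-io spec-sheet numbers.
# All figures come from the comment above; they are claims, not measurements.

iops = 120_000        # claimed mixed read/write IOPS at a 4 KiB packet size
packet_kib = 4

# Throughput implied by the IOPS figure alone (MiB/s):
implied_mib_s = iops * packet_kib / 1024
print(f"Implied 4 KiB random throughput: {implied_mib_s:.0f} MiB/s")

# Cost of the 160 GB unit at the quoted ~$20/GB:
capacity_gb = 160
cost_usd = capacity_gb * 20
print(f"Approximate unit cost: ${cost_usd}")
```

Note the implied random-I/O throughput (~470 MiB/s) sits below the quoted 1.0 GB/s sequential read bandwidth, which is the pattern one would expect: the 32K-packet bandwidth figures and the 4K IOPS figure describe different workloads.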
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28191281</id>
	<title>Re:No.</title>
	<author>toddestan</author>
	<datestamp>1243958580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>One thing that could help is fast memory.  I've found dual-channel DDR400 to be faster than stuff like DDR2-533, which may be the speed you'll find in a low-end newer multi-core machine.  And since 3.5GHz is not actually a speed of P4 that Intel ever sold, if he's not making stuff up and is running an overclocked P4, then his bus speeds may be even faster than that.  Though I don't get turning off hyperthreading - I've found that the P4 generally performed much better with it on than off, unless you're running one of those few programs that run into trouble with it on (rare, but I've seen them).</p></htmltext>
<tokenext>One thing that could help is fast memory .
I 've found dual channel DDR400 to be faster than stuff like DDR2-533 , which may be the speed you 'll find in a low-end newer multi-core machine .
And since 3.5Ghz is not actually a speed of P4 Intel ever sold , if he 's not making stuff up and is running an overclocked P4 then his bus speeds may be even faster than that .
Though I do n't get turning off hyperthreading - I 've found that the P4 generally performed much better with it on than off , unless you 're running one of those few programs that run into trouble with it on ( rare , but I 've seen them ) .</tokentext>
<sentencetext>One thing that could help is fast memory.
I've found dual channel DDR400 to be faster than stuff like DDR2-533, which may be the speed you'll find in a low-end newer multi-core machine.
And since 3.5GHz is not actually a speed of P4 that Intel ever sold, if he's not making stuff up and is running an overclocked P4, then his bus speeds may be even faster than that.
Though I don't get turning off hyperthreading - I've found that the P4 generally performed much better with it on than off, unless you're running one of those few programs that run into trouble with it on (rare, but I've seen them).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28183715</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181193</id>
	<title>Scary Quote from the Article</title>
	<author>Gazzonyx</author>
	<datestamp>1243953540000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext><div class="quote"><p>[...] Not only that, but it's hitting the market early. AMD had originally planned to introduce this product in the October time frame, but the first spin of Istanbul silicon came back solid, so the firm pulled the launch forward into June. Even with the accelerated schedule, of course, Istanbul comes not a moment too soon, now that Nehalem Xeons are out in the wild.</p></div><p>Does anyone else think that this seems a little convenient?  I'm really hoping that they didn't just tone down the testing to make it to market.  I'm thinking they'll go to market and then quickly release a new revision to fix the corners that they cut the first time around.  I hope I'm wrong, but AMD has been slipping lately.<br> <br>  Any EEs out there know the process well enough to confirm or deny my suspicions?</p>
	</htmltext>
<tokenext>[ ... ] Not only that , but it 's hitting the market early .
AMD had originally planned to introduce this product in the October time frame , but the first spin of Istanbul silicon came back solid , so the firm pulled the launch forward into June .
Even with the accelerated schedule , of course , Istanbul comes not a moment too soon , now that Nehalem Xeons are out in the wild.Does anyone else think that this seems a little convenient ?
I 'm really hoping that they did n't just tone down the testing to make it to market .
I 'm thinking they 'll go to market and then quickly release a new revision to fix the corners that they cut the first time around .
I hope I 'm wrong , but AMD has been slipping lately .
Any EE 's out there know the process well enough to confirm or deny my suspicions ?</tokentext>
<sentencetext>[...] Not only that, but it's hitting the market early.
AMD had originally planned to introduce this product in the October time frame, but the first spin of Istanbul silicon came back solid, so the firm pulled the launch forward into June.
Even with the accelerated schedule, of course, Istanbul comes not a moment too soon, now that Nehalem Xeons are out in the wild. Does anyone else think that this seems a little convenient?
I'm really hoping that they didn't just tone down the testing to make it to market.
I'm thinking they'll go to market and then quickly release a new revision to fix the corners that they cut the first time around.
I hope I'm wrong, but AMD has been slipping lately.
Any EE's out there know the process well enough to confirm or deny my suspicions?
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28187467</id>
	<title>Re:Scary Quote from the Article</title>
	<author>Anonymous</author>
	<datestamp>1243935660000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>Actually, testing was increased for 6Core... as our 4Core tests no longer stressed a system with 50% more cores the same way.</p><p>What changed was process.  6Core uses all of the 'good' tech from Shanghai, then implements a few things differently (rev upgrades, etc).  The reason 6Core launched so quickly is we learned all of our lessons on the initial quad-core fiasco.  We did things 'right' this time, and the result is... a launch date that is nearly 12 months ahead of the initial schedule (which was set 2 years ago).</p><p>Personally, I trust the 6Core parts more so than our 4Core parts... but maybe that's 'cause I've been testing the 6Core parts for over 8 months and have had relatively few problems.</p><p>Trust me, we can't afford to fail, so there's no way they're going to cut corners.  The last thing we want is another Cache Disable fiasco.  Mark my anonymous word, 6Core is a fully tested and mother-approved processor.</p></htmltext>
<tokenext>Actually , testing was increased for 6Core... As our 4Core tests no longer stressed a system with 50 % more cores the same way . What changed was Process .
6Core uses all of the 'good ' tech from Shanghai , then implements a few things differently ( rev upgrades , etc ) .
The reason 6Core launched soo quickly , is we learned all of our lessons on the initial quad core fiasco .
We did things 'right ' this time , and the result is... a launch date that is nearly 12mos ahead of the initial schedule ( which was set 2yrs ago ) .Personally , I trust the 6Core parts moreso than our 4Core parts... but maybe that 's cause i 've been testing the 6Core parts for over 8 mos and have had realatively few problems.Trust me , we ca n't afford to fail , so there 's no way they 're going to cut corners .
The last thing we want is another Cache Disable fiasco .
Mark my anonymous word , 6Core is a fully tested and mother approved processor .</tokentext>
<sentencetext>Actually, testing was increased for 6Core... as our 4Core tests no longer stressed a system with 50% more cores the same way. What changed was process.
6Core uses all of the 'good' tech from Shanghai, then implements a few things differently (rev upgrades, etc).
The reason 6Core launched so quickly is we learned all of our lessons on the initial quad-core fiasco.
We did things 'right' this time, and the result is... a launch date that is nearly 12 months ahead of the initial schedule (which was set 2 years ago). Personally, I trust the 6Core parts more so than our 4Core parts... but maybe that's 'cause I've been testing the 6Core parts for over 8 months and have had relatively few problems. Trust me, we can't afford to fail, so there's no way they're going to cut corners.
The last thing we want is another Cache Disable fiasco.
Mark my anonymous word, 6Core is a fully tested and mother approved processor.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181193</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28184507</id>
	<title>Re:it's interesting, but not becase of 6c</title>
	<author>Vancorps</author>
	<datestamp>1243966260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Scaling vertically hasn't been a good idea for a long time unless your app has trouble scaling horizontally. I'm in the process of creating a proposal with a back-end database cluster consisting of 4-6 nodes. Now I could achieve the same horsepower by buying an 8s or a 4s server and not have to buy as many machines, but 4s servers seem to be 3 times more expensive than 2s servers, so I can just buy more dual-processor servers and scale out to achieve the target number of connections served. </p><p>Of course more servers are harder to manage and there is more hardware that can go bad, although you are more tolerant of failures, so it's quite the trade-off. </p></htmltext>
<tokenext>Scaling vertically has n't been a good idea for a long time unless you 're app has trouble scaling horizontally .
I 'm in the process of creating a proposal with a back-end database cluster considering of 4-6 nodes .
Now I could achieve the same horsepower by buying an 8s or a 4s server and not have to buy as many machines but 4s servers seem to be 3 times more expensive than 2s socket servers so I can just buy more dual processor servers and scale out to achieve the target number of connections served .
Of course more servers are harder to manage and there is more hardware that can go bad although you are more tolerant of failures so its quite the trade-off .</tokentext>
<sentencetext>Scaling vertically hasn't been a good idea for a long time unless your app has trouble scaling horizontally.
I'm in the process of creating a proposal with a back-end database cluster consisting of 4-6 nodes.
Now I could achieve the same horsepower by buying an 8s or a 4s server and not have to buy as many machines, but 4s servers seem to be 3 times more expensive than 2s servers, so I can just buy more dual-processor servers and scale out to achieve the target number of connections served.
Of course more servers are harder to manage and there is more hardware that can go bad, although you are more tolerant of failures, so it's quite the trade-off. </sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182865</parent>
</comment>
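The trade-off in the comment above reduces to simple per-socket pricing. A minimal sketch of that arithmetic, using the comment's own rough claim that a 4-socket box costs about 3x a 2-socket box (prices are normalized, not real quotes):

```python
# Scale-out vs. scale-up cost sketch, per the comment's ~3x price claim.
# Prices are normalized to a 2-socket server = 1.0; purely illustrative.

price_2s = 1.0
price_4s = 3.0 * price_2s   # the comment's estimate, not vendor pricing
sockets_needed = 8          # hypothetical capacity target

cost_scale_out = (sockets_needed // 2) * price_2s   # four 2s servers
cost_scale_up = (sockets_needed // 4) * price_4s    # two 4s servers

print(f"scale-out: {cost_scale_out}, scale-up: {cost_scale_up}")
```

Under these assumptions, scale-out delivers the same socket count for two-thirds the cost, which is exactly why the commenter favors more 2-socket machines despite the added management burden.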
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28183875</id>
	<title>Re:Fun fact: Istanbul was Constantinople</title>
	<author>MightyMartian</author>
	<datestamp>1243963500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>The Eastern Roman Empire based in Constantinople lasted as long as the Egyptian empire, but its citizens never felt the same feeling of continuity and stability that the ancient Egyptians felt.</p></div></blockquote><p>Huh?  If you take the founding of the Eastern Empire to be Constantine's moving the capital to Byzantium (renaming it Constantinople), that's 330AD to its fall in 1453.  Dynastic Egypt traditionally dates back to somewhere around 3100BC, and the end is usually marked as the fall of the Ptolemaic Dynasty in 30BC.  So the Byzantine Empire lasted a little over 1,100 years, whereas Egypt lasted three thousand years.</p>
	</htmltext>
<tokenext>The Eastern Roman Empire based in Constantinople lasted as long as the Egyptian empire , but its citizens never felt the same feeling of continuity and stability that the ancient Egyptians felt.Huh ?
If you take the founding of the Eastern Empire to be Constantine 's moving the capital to Byzantium ( renaming it Constantinople ) , that 's 330AD to its fall in 1453 .
Dynastic Egypt traditionally dates back to somewhere around 3100BC , and the end is usually marked as the fall of the Ptolemaic Dynasty in 30BC .
So the Byzantine Empire lasted a little over 1,100 years , whereas Egypt lasted three thousand years .</tokentext>
<sentencetext>The Eastern Roman Empire based in Constantinople lasted as long as the Egyptian empire, but its citizens never felt the same feeling of continuity and stability that the ancient Egyptians felt. Huh?
If you take the founding of the Eastern Empire to be Constantine's moving the capital to Byzantium (renaming it Constantinople), that's 330AD to its fall in 1453.
Dynastic Egypt traditionally dates back to somewhere around 3100BC, and the end is usually marked as the fall of the Ptolemaic Dynasty in 30BC.
So the Byzantine Empire lasted a little over 1,100 years, whereas Egypt lasted three thousand years.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180395</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182649</id>
	<title>In other news ...</title>
	<author>bkaul</author>
	<datestamp>1243958940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Energizer corporation is now seeking to purchase AMD and fold it into the Schick lineup, in order to one-up Gillette's vibrating razor.</htmltext>
<tokenext>Energizer corporation is now seeking to purchase AMD and fold it into the Schick lineup , in order to one-up Gillette 's vibrating razor .</tokentext>
<sentencetext>Energizer corporation is now seeking to purchase AMD and fold it into the Schick lineup, in order to one-up Gillette's vibrating razor.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180413</id>
	<title>Now where can I ...</title>
	<author>roger_that</author>
	<datestamp>1243949040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>get a couple of these to test?  Sounds like we could get some pretty good number-crunching results.</p></htmltext>
<tokenext>get a couple of these to test ?
Sounds like we could get some pretty good number-crunching results .</tokentext>
<sentencetext>get a couple of these to test?
Sounds like we could get some pretty good number-crunching results.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28184107</id>
	<title>Re:No.</title>
	<author>Anonymous</author>
	<datestamp>1243964580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I assure you my 3.6GHz Core 2 Duo will cause severe harm to your 3.5GHz P4.</p></htmltext>
<tokenext>I assure you my 3.6Ghz Core 2 Duo will cause severe harm to your 3.5Ghz P4 .</tokentext>
<sentencetext>I assure you my 3.6Ghz Core 2 Duo will cause severe harm to your 3.5Ghz P4.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182123</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181053</id>
	<title>Re:Do i need Erlang?</title>
	<author>LWATCDR</author>
	<datestamp>1243952700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Nope. This is a server CPU. Things like database servers already scale well.<br>Virtualization by definition will scale well.<br>Or to put it in simple terms:<br>You know that old four-socket server with 8 cores total? You can now replace it with a two-socket machine with 12 cores total.<br>Or you know that four-socket 16-core server? Well, you can now upgrade that to a 24-core server.</p></htmltext>
<tokenext>Nope .
This is a server CPU .
Things like Database servers already scale well .
Virtualization by definition will scale well . Or to put it in simple terms . You know that old four-socket server with 8 cores total ?
You can now replace it with a two-socket machine with 12 cores total . Or you know that four-socket 16-core server ?
Well you can now upgrade that to a 24-core server .</tokentext>
<sentencetext>Nope.
This is a server CPU.
Things like database servers already scale well.
Virtualization by definition will scale well. Or to put it in simple terms: You know that old four-socket server with 8 cores total?
You can now replace it with a two-socket machine with 12 cores total. Or you know that four-socket 16-core server?
Well, you can now upgrade that to a 24-core server.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180547</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180763</id>
	<title>Re:Wasn't it called Constantinople?</title>
	<author>underqualified</author>
	<datestamp>1243951020000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext>no no no. that was the beta version.</htmltext>
<tokenext>no no no .
that was the beta version .</tokentext>
<sentencetext>no no no.
that was the beta version.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180389</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180707</id>
	<title>dont bullshit</title>
	<author>unity100</author>
	<datestamp>1243950720000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>1</modscore>
	<htmltext><p>Egyptian civilization started well around 4000 BC and lasted until 400 BC. That's around 3600 years.</p><p>The Eastern Roman Empire is from AD 250-ish to AD 1453.</p><p>EVEN if you add the entire Roman Empire's history onto that, which makes it from 500 BC to 1453 AD, it still makes 2000 years. It doesn't come anywhere near Egypt.</p></htmltext>
<tokenext>egyptian civilization started well around 4000 BC and lasted until 400 BC .
thats around 3600 years.eastern roman empire is from ad 250 ish to ad 1453.EVEN if you add entire roman empire history unto that , which makes from 500 BC to 1453 AD , it still makes 2000 years .
doesnt come anywhere near egypt .</tokentext>
<sentencetext>Egyptian civilization started well around 4000 BC and lasted until 400 BC.
That's around 3600 years. The Eastern Roman Empire is from AD 250-ish to AD 1453. EVEN if you add the entire Roman Empire's history onto that, which makes it from 500 BC to 1453 AD, it still makes 2000 years.
It doesn't come anywhere near Egypt.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180395</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28185087</id>
	<title>Re:No.</title>
	<author>PitaBred</author>
	<datestamp>1243968600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You can only make a single horse so big. Eventually you'll need a team of horses to pull a wagon quickly. It's just that the P4's pipeline and memory access pattern is highly tuned for streaming data like your RED One (I'm assuming 4K? Or 3K? You don't really specify). Your quad-core 2.2GHz box could easily do it if you had a competent decoder that properly multi-threaded. I can decode 1080p on the CPU with maybe 60% total CPU usage on a 2.4GHz dual-core AMD chip with a multi-threaded H.264 decoder.</htmltext>
<tokenext>You can only make a single horse so big .
Eventually you 'll need a team of horses to pull a wagon quickly .
It 's that the P4 's pipeline and memory access pattern is highly tuned for streaming data like your REDOne ( I 'm assuming 4K ?
Or 3K ?
You do n't really specify ) .
Your quad-core 2.2GHz box could easily do it if you had a competent decoder that properly multi-threaded .
I can decode 1080p on CPU with maybe 60 % total CPU usage on a 2.4GHz dual-core AMD chip with a multi-threaded H.264 decoder .</tokentext>
<sentencetext>You can only make a single horse so big.
Eventually you'll need a team of horses to pull a wagon quickly.
It's that the P4's pipeline and memory access pattern is highly tuned for streaming data like your REDOne (I'm assuming 4K?
Or 3K?
You don't really specify).
Your quad-core 2.2GHz box could easily do it if you had a competent decoder that properly multi-threaded.
I can decode 1080p on CPU with maybe 60% total CPU usage on a 2.4GHz dual-core AMD chip with a multi-threaded H.264 decoder.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182123</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28184593</id>
	<title>I'd rather have faster disk I/O</title>
	<author>PeeAitchPee</author>
	<datestamp>1243966620000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>We run a lot of commercial OCR (as in millions of images), which is extremely processor-intensive, disk-intensive, memory-intensive, you name it.  Our current main OCR server is a dual quad-core Xeon X5355 box with 16 GB of RAM.  Our OCR software multithreads and the processor is no longer the bottleneck -- it's now disk I/O.  While current drives continue to increase in size, their read / write speed is what keeps us from getting work done faster.  It now takes several orders of magnitude longer to build, and then export, for example, a 2 GB batch than it does to recognize it, and the holdup is entirely due to disk I/O.</p><p>SSDs help.  We recently upgraded our server's OS drive to two Intel Extreme 64GB SSDs in RAID 0 (also using part of the array as a "scratchpad" for the OCR batches), and that cut the disk I/O time approximately in half -- but we're still talking almost an hour for your typical 2 GB batch.  Time is money, and we'd gladly throw more money at faster infrastructure were it available.  SSDs are still way too expensive to replace our existing main storage arrays, though.</p><p>So, while I appreciate continuing work in processor speed and density, I'd say I'd rather see a commensurate increase (and reduction in cost!) in disk speed at this point.  Just my .02.</p></htmltext>
<tokenext>We run a lot of commerical OCR ( as in millions of images ) , which is extremely processor-intensive , disk-intensive , memory-intensive , you name it .
Our current main OCR server is a dual quad-core Xeon X5355 box with 16 GB of RAM .
Our OCR software multithreads and the processor is no longer the bottleneck -- it 's now disk I/O .
While current drives continue to increase in size , their read / write speed is what keeps us from getting work done faster .
It now takes several orders of magnitude longer to build , and then export , for example , a 2 GB batch than it does to recognize it , and the holdup is entirely due to disk I/O.SSDs help .
We recently upgraded our server 's OS drive to two Intel Extreme 64GB SSDs in RAID 0 ( also using part of the array as a " scratchpad " for the OCR batches ) , and that cut the disk I/O time approximately in half -- but we 're still talking almost an hour for your typical 2 GB batch .
Time is money , and we 'd gladly throw more money at faster infrastructure were it available .
SSDs are still way too expensive to replace our existing main storage arrays , though.So , while I appreciate continuing work in processor speed and density , I 'd say I 'd rather see a commensurate increase ( and reduction in cost !
) in disk speed at this point .
Just my .02 .</tokentext>
<sentencetext>We run a lot of commercial OCR (as in millions of images), which is extremely processor-intensive, disk-intensive, memory-intensive, you name it.
Our current main OCR server is a dual quad-core Xeon X5355 box with 16 GB of RAM.
Our OCR software multithreads and the processor is no longer the bottleneck -- it's now disk I/O.
While current drives continue to increase in size, their read / write speed is what keeps us from getting work done faster.
It now takes several orders of magnitude longer to build, and then export, for example, a 2 GB batch than it does to recognize it, and the holdup is entirely due to disk I/O. SSDs help.
We recently upgraded our server's OS drive to two Intel Extreme 64GB SSDs in RAID 0 (also using part of the array as a "scratchpad" for the OCR batches), and that cut the disk I/O time approximately in half -- but we're still talking almost an hour for your typical 2 GB batch.
Time is money, and we'd gladly throw more money at faster infrastructure were it available.
SSDs are still way too expensive to replace our existing main storage arrays, though. So, while I appreciate continuing work in processor speed and density, I'd say I'd rather see a commensurate increase (and reduction in cost!) in disk speed at this point.
Just my .02.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182865</parent>
</comment>
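The "almost an hour for a 2 GB batch" complaint above implies a strikingly low effective disk throughput. A minimal sketch of that estimate (the batch size is from the comment; "almost an hour" is assumed here to mean roughly 55 minutes):

```python
# Effective-throughput estimate from the OCR comment's figures.
# batch size is as stated; the 55-minute duration is an assumption
# standing in for "almost an hour".

batch_gib = 2
seconds = 55 * 60

effective_mib_s = batch_gib * 1024 / seconds
print(f"Effective throughput: {effective_mib_s:.2f} MiB/s")
```

Well under 1 MiB/s of effective throughput on hardware capable of far more suggests the workload is dominated by small random operations and filesystem metadata rather than raw sequential bandwidth, which is consistent with the commenter seeing only a ~2x gain from RAID 0 SSDs.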
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28189727</id>
	<title>Re:Wasn't it called Constantinople?</title>
	<author>jonadab</author>
	<datestamp>1243946400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It was called Byzantium when the project launched, but then it was changed to Constantinople when the CEO split the project in two and left half of it in charge of each of his two veeps.  (The other half of the project was the Rome chip, but that's another story.)  It didn't get renamed to Istanbul until after the hostile takeover.</htmltext>
<tokenext>It was called Byzantium when the project launched , but then it was changed to Constantinople when the CEO split the project in two and left half of it in charge of each of his two veeps .
( The other half of the project was the Rome chip , but that 's another story .
) It did n't get renamed to Istanbul until after the hostile takeover .</tokentext>
<sentencetext>It was called Byzantium when the project launched, but then it was changed to Constantinople when the CEO split the project in two and left half of it in charge of each of his two veeps.
(The other half of the project was the Rome chip, but that's another story.
)  It didn't get renamed to Istanbul until after the hostile takeover.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180389</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180395</id>
	<title>Fun fact: Istanbul was Constantinople</title>
	<author>BadAnalogyGuy</author>
	<datestamp>1243948860000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>The Eastern Roman Empire based in Constantinople lasted as long as the Egyptian empire, but its citizens never felt the same feeling of continuity and stability that the ancient Egyptians felt.</p><p>Istanbul is a pretty clever name for a chipmaker who, like the legendary phoenix, dies and then returns from the ashes.</p></htmltext>
<tokenext>The Eastern Roman Empire based in Constantinople lasted as long as the Egyptian empire , but its citizens never felt the same feeling of continuity and stability that the ancient Egyptians felt. Istanbul is a pretty clever name for a chipmaker who , like the legendary phoenix , dies and then returns from the ashes .</tokentext>
<sentencetext>The Eastern Roman Empire based in Constantinople lasted as long as the Egyptian empire, but its citizens never felt the same feeling of continuity and stability that the ancient Egyptians felt. Istanbul is a pretty clever name for a chipmaker who, like the legendary phoenix, dies and then returns from the ashes.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180795</id>
	<title>Re:dont bullshit</title>
	<author>Anonymous</author>
	<datestamp>1243951320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>What the heck is the point of your post? Just look at what was accomplished in philosophy and government between 500BC to 1453AD.  The Ancient Egyptians may have been around first but so what?</p></htmltext>
<tokenext>What the heck is the point of your post ?
Just look at what was accomplished in philosophy and government between 500BC to 1453AD .
The Ancient Egyptians may have been around first but so what ?</tokentext>
<sentencetext>What the heck is the point of your post?
Just look at what was accomplished in philosophy and government between 500BC to 1453AD.
The Ancient Egyptians may have been around first but so what?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180707</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180443</id>
	<title>But...</title>
	<author>Anonymous</author>
	<datestamp>1243949280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>That's nothing compared to 14 cores.</htmltext>
<tokenext>That 's nothing compared to 14 cores .</tokentext>
<sentencetext>That's nothing compared to 14 cores.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28183715</id>
	<title>Re:No.</title>
	<author>default luser</author>
	<datestamp>1243962720000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext><p>You've already made this comment before, and I've already responded, so I'll keep it short and sweet.</p><p>If you're using a slow 2.2 GHz Quad core, that's not the fault of the industry, that's the fault of YOU.  I have already made it clear that the <a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16819115036" title="newegg.com">top-end Core 2 Duo chips</a> [newegg.com] would run circles around your P4, but apparently you'd prefer to pretend they don't exist.  As for your dog-slow quad core, that was YOUR purchasing decision.  You can purchase <a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16819103471" title="newegg.com">MUCH FASTER</a> [newegg.com] <a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16819115041" title="newegg.com">quad cores</a> [newegg.com] today for reasonable prices, but apparently you're still stuck in the year 2006.</p><p>The reason Core 2 / Quad destroys the P4 despite having a slower clock speed: Core 2 ups the Instructions Per Clock versus the Pentium 4.  The increase is between 60\% and 100\% more IPC.  If you <a href="http://slashdot.org/comments.pl?sid=1241585&amp;cid=28055933" title="slashdot.org">read my previous response to you on the subject</a> [slashdot.org], you'd actually know that, instead of continuing to spout your ignorant bullshit.</p><p>And if you can't find a video codec with multiple core support, you're looking in the wrong place.  Video decode is one of those embarrassingly-easy things to parallelize, and so your "boast" is really just outing you as a lazy bastard who can't take five seconds to search Google.</p></htmltext>
<tokenext>You 've already made this comment before , and I 've already responded , so I 'll keep it short and sweet. If you 're using a slow 2.2 GHz Quad core , that 's not the fault of the industry , that 's the fault of YOU .
I have already made it clear that the top-end Core 2 Duo chips [ newegg.com ] would run circles around your P4 , but apparently you 'd prefer to pretend they do n't exist .
As for your dog-slow quad core , that was YOUR purchasing decision .
You can purchase MUCH FASTER [ newegg.com ] quad cores [ newegg.com ] today for reasonable prices , but apparently you 're still stuck in the year 2006. The reason Core 2 / Quad destroys the P4 despite having a slower clock speed : Core 2 ups the Instructions Per Clock versus the Pentium 4 .
The increase is between 60 \ % and 100 \ % more IPC .
If you read my previous response to you on the subject [ slashdot.org ] , you 'd actually know that , instead of continuing to spout your ignorant bullshit. And if you ca n't find a video codec with multiple core support , you 're looking in the wrong place .
Video decode is one of those embarrassingly-easy things to parallelize , and so your " boast " is really just outing you as a lazy bastard who ca n't take five seconds to search Google .</tokentext>
<sentencetext>You've already made this comment before, and I've already responded, so I'll keep it short and sweet. If you're using a slow 2.2 GHz Quad core, that's not the fault of the industry, that's the fault of YOU.
I have already made it clear that the top-end Core 2 Duo chips [newegg.com] would run circles around your P4, but apparently you'd prefer to pretend they don't exist.
As for your dog-slow quad core, that was YOUR purchasing decision.
You can purchase MUCH FASTER [newegg.com] quad cores [newegg.com] today for reasonable prices, but apparently you're still stuck in the year 2006. The reason Core 2 / Quad destroys the P4 despite having a slower clock speed: Core 2 ups the Instructions Per Clock versus the Pentium 4.
The increase is between 60\% and 100\% more IPC.
If you read my previous response to you on the subject [slashdot.org], you'd actually know that, instead of continuing to spout your ignorant bullshit. And if you can't find a video codec with multiple core support, you're looking in the wrong place.
Video decode is one of those embarrassingly-easy things to parallelize, and so your "boast" is really just outing you as a lazy bastard who can't take five seconds to search Google.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182123</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28190887</id>
	<title>Re:Another test at anandtech.com</title>
	<author>Zork the Almighty</author>
	<datestamp>1243954980000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext>Hyperthreading shows you eight fake cores which map to four real cores.  I benchmarked it extensively.  Computationally intensive routines with a small memory footprint can gain up to 20\%.  Bandwidth or memory intensive routines can lose up to 50\%.  In the extreme case, 8 threads on virtual cores can be half the speed of 4 threads on 4 real cores on a Core i7.  Keep in mind, this is on a crazy application that generates lots of data.
<br> <br>

If your algorithm is designed to break up the problem to exploit the cache then hyperthreading is a bigger mess.  The data for thread 1 and thread 2 (out of 8) might be complementary, but the operating system will run those threads on different actual cores, because all it sees is the virtual cores.  This can be very inefficient if you need the whole cache.<br> <br>

Perhaps worst of all, you are stuck always running 8 threads.  2-6 threads may not be distributed evenly across the real cores, leading to inconsistent performance.  Therefore, you may lose performance by attempting to scale the problem further than it is efficient to do so.  With real cores, I can decide (based on problem size) the correct number of cores to use.<br> <br>

In conclusion, hyperthreading has its uses, but operating systems are oblivious to it and that's a major problem with more than one core.</htmltext>
<tokenext>Hyperthreading shows you eight fake cores which map to four real cores .
I benchmarked it extensively .
Computationally intensive routines with a small memory footprint can gain up to 20 \ % .
Bandwidth or memory intensive routines can lose up to 50 \ % .
In the extreme case , 8 threads on virtual cores can be half the speed of 4 threads on 4 real cores on a Core i7 .
Keep in mind , this is on a crazy application that generates lots of data .
If your algorithm is designed to break up the problem to exploit the cache then hyperthreading is a bigger mess .
The data for thread 1 and thread 2 ( out of 8 ) might be complementary , but the operating system will run those threads on different actual cores , because all it sees is the virtual cores .
This can be very inefficient if you need the whole cache .
Perhaps worst of all , you are stuck always running 8 threads .
2-6 threads may not be distributed evenly across the real cores , leading to inconsistent performance .
Therefore , you may lose performance by attempting to scale the problem further than it is efficient to do so .
With real cores , I can decide ( based on problem size ) the correct number of cores to use .
In conclusion , hyperthreading has its uses , but operating systems are oblivious to it and that 's a major problem with more than one core .</tokentext>
<sentencetext>Hyperthreading shows you eight fake cores which map to four real cores.
I benchmarked it extensively.
Computationally intensive routines with a small memory footprint can gain up to 20\%.
Bandwidth or memory intensive routines can lose up to 50\%.
In the extreme case, 8 threads on virtual cores can be half the speed of 4 threads on 4 real cores on a Core i7.
Keep in mind, this is on a crazy application that generates lots of data.
If your algorithm is designed to break up the problem to exploit the cache then hyperthreading is a bigger mess.
The data for thread 1 and thread 2 (out of 8) might be complementary, but the operating system will run those threads on different actual cores, because all it sees is the virtual cores.
This can be very inefficient if you need the whole cache.
Perhaps worst of all, you are stuck always running 8 threads.
2-6 threads may not be distributed evenly across the real cores, leading to inconsistent performance.
Therefore, you may lose performance by attempting to scale the problem further than it is efficient to do so.
With real cores, I can decide (based on problem size) the correct number of cores to use.
In conclusion, hyperthreading has its uses, but operating systems are oblivious to it and that's a major problem with more than one core.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28187615</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180457</id>
	<title>EPT?</title>
	<author>reset\_button</author>
	<datestamp>1243949340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Does the Istanbul have Extended Page Table support like Nehalem does?  This is supposed to give a big performance boost to virtual machines, though I haven't seen any hard numbers.  Any info?</htmltext>
<tokenext>Does the Istanbul have Extended Page Table support like Nehalem does ?
This is supposed to give a big performance boost to virtual machines , though I have n't seen any hard numbers .
Any info ?</tokentext>
<sentencetext>Does the Istanbul have Extended Page Table support like Nehalem does?
This is supposed to give a big performance boost to virtual machines, though I haven't seen any hard numbers.
Any info?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182865</id>
	<title>it's interesting, but not because of 6c</title>
	<author>markhahn</author>
	<datestamp>1243959540000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext><p>the real news here is not the extra couple cores, but coherency snooping.  this feature will make 4/8s machines far more attractive; it doesn't hurt that with 48 cores and 32 ddr3/1333 dimms, you have quite a monster.  \_and\_ incidentally something that Intel can't currently answer.</p><p>there's no question that nehalem has put a serious dent in the market, but Intel's going quite slow in rolling out higher-end products.  yes, a nehalem socket delivers about 50\% more bandwidth than a current opteron socket, but show me the 8s nehalem machines.  nehalem-ex is coming, but how soon and at what price?</p><p>one thing I haven't seen is any attempt to measure real SMP performance on new-gen chips.  I don't mean something like Stream or VMs, where there is no real sharing inherent to the workload.  how long does it take to exchange a \_contended\_ lock between cores (in the same socket vs remote)?</p><p>finally, the real question is whether there is actual demand for more-core chips.  I'm in HPC, and we always want more, and throw good money.  but it has to be smart more - the 6-core core2, for instance, was just asinine because even 2c core2 is drastically memory-bandwidth-starved.  nehalem-ex seems quite promising, but if it's cheaper to cluster dual-socket machines rather than pay the premium for 4s's, the 4s market will be stunted and less successful in a self-fulfilling way...</p></htmltext>
<tokenext>the real news here is not the extra couple cores , but coherency snooping .
this feature will make 4/8s machines far more attractive ; it does n't hurt that with 48 cores and 32 ddr3/1333 dimms , you have quite a monster .
\ _and \ _ incidentally something that Intel ca n't currently answer. there 's no question that nehalem has put a serious dent in the market , but Intel 's going quite slow in rolling out higher-end products .
yes , a nehalem socket delivers about 50 \ % more bandwidth than a current opteron socket , but show me the 8s nehalem machines .
nehalem-ex is coming , but how soon and at what price ? one thing I have n't seen is any attempt to measure real SMP performance on new-gen chips .
I do n't mean something like Stream or VMs , where there is no real sharing inherent to the workload .
how long does it take to exchange a \ _contended \ _ lock between cores ( in the same socket vs remote ) ? finally , the real question is whether there is actual demand for more-core chips .
I 'm in HPC , and we always want more , and throw good money .
but it has to be smart more - the 6-core core2 , for instance , was just asinine because even 2c core2 is drastically memory-bandwidth-starved .
nehalem-ex seems quite promising , but if it 's cheaper to cluster dual-socket machines rather than pay the premium for 4s 's , the 4s market will be stunted and less successful in a self-fulfilling way.. .</tokentext>
<sentencetext>the real news here is not the extra couple cores, but coherency snooping.
this feature will make 4/8s machines far more attractive; it doesn't hurt that with 48 cores and 32 ddr3/1333 dimms, you have quite a monster.
\_and\_ incidentally something that Intel can't currently answer. there's no question that nehalem has put a serious dent in the market, but Intel's going quite slow in rolling out higher-end products.
yes, a nehalem socket delivers about 50\% more bandwidth than a current opteron socket, but show me the 8s nehalem machines.
nehalem-ex is coming, but how soon and at what price? one thing I haven't seen is any attempt to measure real SMP performance on new-gen chips.
I don't mean something like Stream or VMs, where there is no real sharing inherent to the workload.
how long does it take to exchange a \_contended\_ lock between cores (in the same socket vs remote)? finally, the real question is whether there is actual demand for more-core chips.
I'm in HPC, and we always want more, and throw good money.
but it has to be smart more - the 6-core core2, for instance, was just asinine because even 2c core2 is drastically memory-bandwidth-starved.
nehalem-ex seems quite promising, but if it's cheaper to cluster dual-socket machines rather than pay the premium for 4s's, the 4s market will be stunted and less successful in a self-fulfilling way...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182609</id>
	<title>Speaking for Intel, spokesperson Nigel Tufnel said</title>
	<author>Phizzle</author>
	<datestamp>1243958760000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>1</modscore>
	<htmltext><b>Intel's next processor will go to Eleven!<br>
When asked by the reporters, as to why Eleven was chosen as the target number of cores, Nigel said <br>
It's six louder than AMD! I mean faster...</b></htmltext>
<tokenext>Intel 's next processor will go to Eleven !
When asked by the reporters , as to why Eleven was chosen as the target number of cores , Nigel said It 's six louder than AMD !
I mean faster.. .</tokentext>
<sentencetext>Intel's next processor will go to Eleven!
When asked by the reporters, as to why Eleven was chosen as the target number of cores, Nigel said 
It's six louder than AMD!
I mean faster...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181787</id>
	<title>'so what' is that :</title>
	<author>unity100</author>
	<datestamp>1243956060000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>ancient egypt is THE source of many of your philosophies and sciences. from 1000 BC and onwards, early greeks were coming to egypt for education. egypt had 2 schools - school of life, and school of death. school of life was teaching stuff related to this world, ie, medicine, land registry, writing, government, and school of death taught stuff pertaining to abstract world. not to mention that many of the professions people identify themselves today originated in egypt.</p><p>even before knossos was known, medicine men and wise men of egypt were world renowned, even legendary in their time. a LOT of stuff that is ascribed to greeks were what greeks learned in egypt.</p><p>brush up on your history.</p></htmltext>
<tokenext>ancient egypt is THE source of many of your philosophies and sciences .
from 1000 BC and onwards , early greeks were coming to egypt for education .
egypt had 2 schools - school of life , and school of death .
school of life was teaching stuff related to this world , ie , medicine , land registry , writing , government , and school of death taught stuff pertaining to abstract world .
not to mention that many of the professions people identify themselves today originated in egypt. even before knossos was known , medicine men and wise men of egypt were world renowned , even legendary in their time .
a LOT of stuff that is ascribed to greeks were what greeks learned in egypt. brush up on your history .</tokentext>
<sentencetext>ancient egypt is THE source of many of your philosophies and sciences.
from 1000 BC and onwards, early greeks were coming to egypt for education.
egypt had 2 schools - school of life, and school of death.
school of life was teaching stuff related to this world, ie, medicine, land registry, writing, government, and school of death taught stuff pertaining to abstract world.
not to mention that many of the professions people identify themselves today originated in egypt. even before knossos was known, medicine men and wise men of egypt were world renowned, even legendary in their time.
a LOT of stuff that is ascribed to greeks were what greeks learned in egypt. brush up on your history.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180795</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28185553</id>
	<title>Re:No.</title>
	<author>Anonymous</author>
	<datestamp>1243970640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>So whoever wrote your H.264 decoder doesn't know how to parallelise the decoding, or doesn't care.</htmltext>
<tokenext>So whoever wrote your H.264 decoder does n't know how to parallelise the decoding , or does n't care .</tokentext>
<sentencetext>So whoever wrote your H.264 decoder doesn't know how to parallelise the decoding, or doesn't care.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182123</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180407</id>
	<title>I won't be impressed until it is...</title>
	<author>Anonymous</author>
	<datestamp>1243948980000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>Over 9000!!!!1</p></htmltext>
<tokenext>Over 9000 ! ! !
! 1</tokentext>
<sentencetext>Over 9000!!!
!1</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180803</id>
	<title>i'm pretty sure it'll be able to run crysis, but..</title>
	<author>underqualified</author>
	<datestamp>1243951380000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>-1</modscore>
	<htmltext>can it run crysis 2?</htmltext>
<tokenext>can it run crysis 2 ?</tokentext>
<sentencetext>can it run crysis 2?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28183883</id>
	<title>computers are more confusing than ever</title>
	<author>jollyreaper</author>
	<datestamp>1243963560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Tried pricing up a decent box for some heavy-lifting, there's just so much complexity out there! It's hard to figure out where the bleeding edge is and where the most effective bang for the buck zone is behind all the blood. 286, 386, 486, a man used to be able to tell where computers sat! And then all that Pentium bullshit started. I don't know what the fuck I'm looking at. I'm crossing my fingers and going with a Tom's Hardware recommended build list.</p></htmltext>
<tokenext>Tried pricing up a decent box for some heavy-lifting , there 's just so much complexity out there !
It 's hard to figure out where the bleeding edge is and where the most effective bang for the buck zone is behind all the blood .
286 , 386 , 486 , a man used to be able to tell where computers sat !
And then all that Pentium bullshit started .
I do n't know what the fuck I 'm looking at .
I 'm crossing my fingers and going with a Tom 's Hardware recommended build list .</tokentext>
<sentencetext>Tried pricing up a decent box for some heavy-lifting, there's just so much complexity out there!
It's hard to figure out where the bleeding edge is and where the most effective bang for the buck zone is behind all the blood.
286, 386, 486, a man used to be able to tell where computers sat!
And then all that Pentium bullshit started.
I don't know what the fuck I'm looking at.
I'm crossing my fingers and going with a Tom's Hardware recommended build list.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28185937</id>
	<title>Re:Scary Quote from the Article</title>
	<author>JF-AMD</author>
	<datestamp>1243972320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Actually, when we laid out the project, we planned for a major spin and some minor tweaks before we would have production silicon.

The first silicon came out strong enough that our partners said "let's take it to market."

No corners were cut. When you start with the solid Shanghai silicon it makes it a lot easier.</htmltext>
<tokenext>Actually , when we laid out the project , we planned for a major spin and some minor tweaks before we would have production silicon .
The first silicon came out strong enough that our partners said " let 's take it to market .
" No corners were cut .
When you start with the solid Shanghai silicon it makes it a lot easier .</tokentext>
<sentencetext>Actually, when we laid out the project, we planned for a major spin and some minor tweaks before we would have production silicon.
The first silicon came out strong enough that our partners said "let's take it to market.
"

No corners were cut.
When you start with the solid Shanghai silicon it makes it a lot easier.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181193</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28187495</id>
	<title>Re:No.</title>
	<author>Anonymous</author>
	<datestamp>1243935780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>top-end Core 2 Duo chips would run circles around your P4, but apparently you'd prefer to pretend they don't exist.</p></div></blockquote><p>He is making a point about the diminishing value of multiple cores.  The fact that the industry has (finally) gotten back up to the point where a single core is clocked faster than a 10 year-old chip doesn't negate the point at all...</p><blockquote><div><p>And if you can't find a video codec with multiple core support, you're looking in the wrong place. Video decode is one of those embarrassingly-easy things to parallelize,</p></div></blockquote><p>EXCEPT for H.264, where decoding is highly serialized.  The only way to thread it to a significant extent is by skipping some of those steps, and sacrificing quality...  That's what the likes of CoreAVC do, hoping nobody notices.  That's why Blu-Ray video is divided up into quadrants.</p><p>And since threading doesn't work for H.264, why bother?  Any other codec is efficient enough that even high bitrate, highdef video can be decoded with a single core on much older machines.</p>
	</htmltext>
<tokenext>top-end Core 2 Duo chips would run circles around your P4 , but apparently you 'd prefer to pretend they do n't exist. He is making a point about the diminishing value of multiple cores .
The fact that the industry has ( finally ) gotten back up to the point where a single core is clocked faster than a 10 year-old chip does n't negate the point at all... And if you ca n't find a video codec with multiple core support , you 're looking in the wrong place .
Video decode is one of those embarrassingly-easy things to parallelize, EXCEPT for H.264 , where decoding is highly serialized .
The only way to thread it to a significant extent is by skipping some of those steps , and sacrificing quality... That 's what the likes of CoreAVC do , hoping nobody notices .
That 's why Blu-Ray video is divided up into quadrants. And since threading does n't work for H.264 , why bother ?
Any other codec is efficient enough that even high bitrate , highdef video can be decoded with a single core on much older machines .</tokentext>
<sentencetext>top-end Core 2 Duo chips would run circles around your P4, but apparently you'd prefer to pretend they don't exist. He is making a point about the diminishing value of multiple cores.
The fact that the industry has (finally) gotten back up to the point where a single core is clocked faster than a 10 year-old chip doesn't negate the point at all... And if you can't find a video codec with multiple core support, you're looking in the wrong place.
Video decode is one of those embarrassingly-easy things to parallelize, EXCEPT for H.264, where decoding is highly serialized.
The only way to thread it to a significant extent is by skipping some of those steps, and sacrificing quality...  That's what the likes of CoreAVC do, hoping nobody notices.
That's why Blu-Ray video is divided up into quadrants. And since threading doesn't work for H.264, why bother?
Any other codec is efficient enough that even high bitrate, highdef video can be decoded with a single core on much older machines.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28183715</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180383</id>
	<title>Just a little FYI for you guys...</title>
	<author>Anonymous</author>
	<datestamp>1243948740000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext>I just crushed up a d20 and smoked it. I'll post in a little bit about where that takes me.</htmltext>
<tokenext>I just crushed up a d20 and smoked it .
I 'll post in a little bit about where that takes me .</tokentext>
<sentencetext>I just crushed up a d20 and smoked it.
I'll post in a little bit about where that takes me.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180673</id>
	<title>Finally</title>
	<author>rodrigoandrade</author>
	<datestamp>1243950420000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext>I'll be finally able to run Crysis at a decent framerate.</htmltext>
<tokenext>I 'll be finally able to run Crysis at a decent framerate .</tokentext>
<sentencetext>I'll be finally able to run Crysis at a decent framerate.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28186423</id>
	<title>video?</title>
	<author>sneakyimp</author>
	<datestamp>1243974540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Anyone have any clue how Nehalem and these multicore AMD beasts would compare for video editing or render farm applications?</p></htmltext>
<tokenext>Anyone have any clue how Nehalem and these multicore AMD beasts would compare for video editing or render farm applications ?</tokentext>
<sentencetext>Anyone have any clue how Nehalem and these multicore AMD beasts would compare for video editing or render farm applications?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181217</id>
	<title>Yeah, but...</title>
	<author>evil\_aar0n</author>
	<datestamp>1243953660000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yeah, but will it run a hackintosh?</p></htmltext>
<tokenext>Yeah , but will it run a hackintosh ?</tokentext>
<sentencetext>Yeah, but will it run a hackintosh?</sentencetext>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28190887
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28187615
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181029
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28185087
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182123
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180723
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180547
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28184507
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182865
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28185553
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182123
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180723
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180547
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28187495
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28183715
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182123
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180723
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180547
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28187771
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182123
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180723
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180547
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180763
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180389
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182177
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180443
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28183875
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180395
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28184185
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182123
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180723
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180547
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28184101
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181193
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28189727
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180389
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28191281
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28183715
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182123
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180723
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180547
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181759
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180349
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181787
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180795
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180707
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180395
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28194065
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180349
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181053
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180547
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180891
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180477
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180823
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180477
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28187467
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181193
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28191459
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181029
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182991
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181193
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28184829
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180477
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28185937
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181193
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28184107
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182123
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180723
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180547
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0051235_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28200651
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28184593
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182865
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0051235.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180389
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180763
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28189727
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0051235.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180457
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0051235.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180547
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180723
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182123
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28185553
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28184107
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28185087
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28187771
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28183715
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28191281
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28187495
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28184185
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181053
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0051235.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180395
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28183875
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180707
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180795
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181787
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0051235.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181217
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0051235.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181193
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28185937
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28184101
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182991
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28187467
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0051235.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180443
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182177
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0051235.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180477
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28184829
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180891
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180823
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0051235.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180673
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0051235.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182865
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28184593
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28200651
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28184507
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0051235.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180349
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28194065
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181759
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0051235.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28180383
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0051235.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28182233
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0051235.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28181029
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28191459
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28187615
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0051235.28190887
</commentlist>
</conversation>
