<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_12_02_215207</id>
	<title>Intel Shows 48-Core x86 Processor</title>
	<author>timothy</author>
	<datestamp>1259745960000</datestamp>
	<htmltext>Vigile writes <i>"Intel unveiled a <a href="http://www.pcper.com/article.php?aid=825">completely new processor design</a> today the company is dubbing the 'Single-chip Cloud Computer' (but was previously codenamed Bangalore).  Justin Rattner, the company's CTO, discussed the new product at a press event in Santa Clara and revealed some interesting information about the goals and design of the new CPU. While terascale processing has been <a href="//tech.slashdot.org/story/06/09/27/1959245/Intels-Terascale-Vision">discussed for some time</a>, this new CPU is the first to integrate full IA x86 cores rather than simple floating point units.  The 48 cores are set 2 to a 'tile' and each tile communicates with others via a 2D mesh network capable of 256 GB/s rather than a large cache structure. "</i></htmltext>
<tokentext>Vigile writes " Intel unveiled a completely new processor design today the company is dubbing the 'Single-chip Cloud Computer ' ( but was previously codenamed Bangalore ) .
Justin Rattner , the company 's CTO , discussed the new product at a press event in Santa Clara and revealed some interesting information about the goals and design of the new CPU .
While terascale processing has been discussed for some time , this new CPU is the first to integrate full IA x86 cores rather than simple floating point units .
The 48 cores are set 2 to a 'tile ' and each tile communicates with others via a 2D mesh network capable of 256 GB/s rather than a large cache structure .
"</tokentext>
<sentencetext>Vigile writes "Intel unveiled a completely new processor design today the company is dubbing the 'Single-chip Cloud Computer' (but was previously codenamed Bangalore).
Justin Rattner, the company's CTO, discussed the new product at a press event in Santa Clara and revealed some interesting information about the goals and design of the new CPU.
While terascale processing has been discussed for some time, this new CPU is the first to integrate full IA x86 cores rather than simple floating point units.
The 48 cores are set 2 to a 'tile' and each tile communicates with others via a 2D mesh network capable of 256 GB/s rather than a large cache structure.
"</sentencetext>
</article>
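The summary's tile-and-mesh layout invites a quick routing sketch. Below is a minimal Python model of hop counts under dimension-ordered (XY) routing; the 6x4 arrangement of the 24 tiles (48 cores, 2 per tile) and the routing policy are illustrative assumptions, not details from the article.

```python
# Sketch: hop counts under dimension-ordered (XY) routing on a 2D mesh.
# Assumes the 24 tiles form a 6x4 grid; layout and routing are assumptions.

COLS, ROWS = 6, 4  # 24 tiles, 2 cores each

def tile_coords(tile_id):
    """Map a tile id (0..23) to (x, y) grid coordinates."""
    return tile_id % COLS, tile_id // COLS

def hops(src, dst):
    """XY routing cost = Manhattan distance between the two tiles."""
    sx, sy = tile_coords(src)
    dx, dy = tile_coords(dst)
    return abs(sx - dx) + abs(sy - dy)

# Worst case: opposite corners of the mesh.
print(hops(0, 23))  # tile (0,0) -> tile (5,3): 8 hops
```

A message between the farthest tiles crosses only 8 routers, which is why a mesh scales better than a shared bus as the core count grows.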
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308914</id>
	<title>Re:48 is sufficient for most Ph.D. dissertations.</title>
	<author>Anonymous</author>
	<datestamp>1259841540000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>called Meiko</p></div></blockquote><p>
Meiko was the company who built the machine.<br>
<br>
INMOS begat the Transputer, which begat Meiko and the CS-2, which begat the Elan/Elite interconnect, which begat Quadrics, which begat QsNet &amp; QsNet II (&amp; almost, but not quite, QsNet III), which begat a whole bunch of redundant people in spring 2009 when they finally folded.</p>
	</htmltext>
<tokentext>called Meiko Meiko was the company who built the machine .
INMOS begat the Transputer , which begat Meiko and the CS-2 , which begat the Elan/Elite interconnect , which begat Quadrics , which begat QsNet &amp; QsNet II ( &amp; almost , but not quite , QsNet III ) , which begat a whole bunch of redundant people in spring 2009 when they finally folded .</tokentext>
<sentencetext>called Meiko
Meiko was the company who built the machine.
INMOS begat the Transputer, which begat Meiko and the CS-2, which begat the Elan/Elite interconnect, which begat Quadrics, which begat QsNet &amp; QsNet II (&amp; almost, but not quite, QsNet III), which begat a whole bunch of redundant people in spring 2009 when they finally folded.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306242</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303564</id>
	<title>Helmer</title>
	<author>dandart</author>
	<datestamp>1259578800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Zomg. Twice as fast as <a href="http://helmer.sfe.se/" title="helmer.sfe.se" rel="nofollow">Helmer</a> [helmer.sfe.se] and probably twice as expensive.</htmltext>
<tokentext>Zomg .
Twice as fast as Helmer [ helmer.sfe.se ] and probably twice as expensive .</tokentext>
<sentencetext>Zomg.
Twice as fast as Helmer [helmer.sfe.se] and probably twice as expensive.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306776</id>
	<title>Re:And yet it's still...</title>
	<author>Anonymous</author>
	<datestamp>1259595000000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>No, mentioning a bad architecture, like Itanium, is not going to put a dent into that argument. ^^</p> </div><p>Similarly, just repeatedly calling the x86 architecture "crappy" does not actually make it "crappy."  Despite having compatible instruction sets, the architecture behind Intel's modern processors has practically nothing else in common with legacy chips like the 386 or 486.  Why don't you actually tell us what makes this 48-core processor so much worse than anything else out there?</p><p>Or would that require you to stop being a jackass?</p>
	</htmltext>
<tokentext>No , mentioning a bad architecture , like Itanium , is not going to put a dent into that argument .
^ ^ Similarly , just repeatedly calling the x86 architecture " crappy " does not actually make it " crappy .
" Despite having compatible instruction sets , the architecture behind Intel 's modern processors has practically nothing else in common with legacy chips like the 386 or 486 .
Why do n't you actually tell us what makes this 48-core processor so much worse than anything else out there ? Or would that require you to stop being a jackass ?</tokentext>
<sentencetext>No, mentioning a bad architecture, like Itanium, is not going to put a dent into that argument.
^^ Similarly, just repeatedly calling the x86 architecture "crappy" does not actually make it "crappy.
"  Despite having compatible instruction sets, the architecture behind Intel's modern processors has practically nothing else in common with legacy chips like the 386 or 486.
Why don't you actually tell us what makes this 48-core processor so much worse than anything else out there? Or would that require you to stop being a jackass?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305540</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30315058</id>
	<title>It still isn't reversible computing!</title>
	<author>bradbury</author>
	<datestamp>1259873640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>If one notes from the articles on the architecture, Intel is *still* not biting the bullet of reversible computing [1].  A fair amount of the architecture has to be built into the frequency and voltage management of the chips (not to mention the chip layers involved in voltage management).  I would like to know whether they are doing all the voltage management on-chip or require new power supplies and/or motherboards (meaning one is unlikely to see plug-in replacements of CPUs on desktop/laptop PCs).  One could adopt a conspiracy perspective and argue that this is Intel's attempt to redefine the "standard" computing platform, forcing all "modern" users to purchase new computers! (Wouldn't that sell hundreds of millions or billions of chips???)</p><p>1. For those unfamiliar with "reversible computing": it evolved from the work of H. Bremermann, R. Landauer and C. Bennett, largely at IBM in the 1960s and 1970s, which pointed out that one cannot "destroy" bits without generating heat (laws of entropy).  As a result, the only way to do computing without generating wasteful energy consumption (in the form of heat radiated, and therefore bumping into the limits of heat dissipation per chip) would be to perform computations reversibly.  I.e. you never destroy mass/energy/charge during a computation -- you simply return it to its original state.  That is "reversible computation".  Unfortunately manufacturers like Intel and AMD have not chosen to pursue this aggressively (one would have to believe that there may be some financial motivation behind this).  I would tend to view this as pushing existing designs, technologies, instruction sets and limits to their farthest bounds before executing a shift to reversible computing.  
It may be observed that Eric Drexler, in Nanosystems, Chapter 12, "Nanomechanical Computational Systems" (published in 1992) explained the operation of an atomic-scale mechanical gate array that did function as a reversible computational architecture, very much like a "reversible" abacus, because the energy required to reset the calculations was significantly less than that required to erase the matter/energy contained in them.</p><p>So the information is out there -- and the question remains when will manufacturers bite the bullet and transform the entire framework into a reversible one?  Now in general one doesn't want to accept the delays of reversing the computation when a simple CLR will do.</p></htmltext>
<tokentext>If one notes from the articles on the architecture Intel is * still * not biting the bullet of reversible computing [ 1 ] .
There has to be a fair amount of the architecture built into the frequency and voltage management of the chips ( not to discuss chip layers involving voltage management ( I would like to know whether they are doing all the voltage management on-chip or require new power supplies and/or motherboards ( meaning one is unlikely to see plug-in replacements of CPUs on desktop/laptop PCs .
One could adopt a conspiracy perspective and argue that this is Intel 's attempt to redefine the " standard " computing platform and forcing all " modern " users to purchase new computers !
( Would n't that sell hundreds of millions or billions of chips ? ? ? ) 1 .
For those unfamiliar with " reversible computing " it evolved from the work of H. Bremermann , R. Landauer and C. Bennett , largely at IBM in the 1960 's and 1970 's and pointed out that one could not " destroy " bits without generating heat ( Laws of entropy ) .
As a result the only way to do computing without generating wasteful energy consumption ( in the form of heat radiated and therefore bumping into the limits of heat dissipation per chip ) would be to perform computations reversibly .
I.e. you never destroy mass/energy/charge during a computation -- you simply return it to its original state .
That is " reversible computation " .
Unfortunately manufacturers like Intel and AMD have not chosen to pursue this aggressively ( one would have to believe that there may be some financial motivation behind this ) .
I would tend to view this as pushing existing designs , technologies , instruction sets and limits to their farthest bounds before executing a shift to reversible computing .
It may be observed that Eric Drexler , in Nanosystems , Chapter 12 , " Nanomechanical Computational Systems " ( published in 1992 ) explained the operation of an atomic scale mechanical gate array that did function as a reversible computational architecture , very much like a " reversible " abacus , because the energy required to reset the calculations was significantly less than that required to erase the matter/energy contained in them.So the information is out there -- and the question remains when will manufacturers bite the bullet and transform the entire framework into a reversible one ?
Now in general one does n't want to accept the delays of reversing the computation when a simple CLR will do .</tokentext>
<sentencetext>If one notes from the articles on the architecture Intel is *still* not biting the bullet of reversible computing [1].
There has to be a fair amount of the architecture built into the frequency and voltage management of the chips (not to discuss chip layers involving voltage management (I would like to know whether they are doing all the voltage management on-chip or require new power supplies and/or motherboards (meaning one is unlikely to see plug-in replacements of CPUs on desktop/laptop PCs.
One could adopt a conspiracy perspective and argue that this is Intel's attempt to redefine the "standard" computing platform and forcing all "modern" users to purchase new computers!
(Wouldn't that sell hundreds of millions or billions of chips???) 1.
For those unfamiliar with "reversible computing" it evolved from the work of H. Bremermann, R. Landauer and C. Bennett, largely at IBM in the 1960's and 1970's and pointed out that one could not "destroy" bits without generating heat (Laws of entropy).
As a result the only way to do computing without generating wasteful energy consumption (in the form of heat radiated and therefore bumping into the limits of heat dissipation per chip) would be to perform computations reversibly.
I.e. you never destroy mass/energy/charge during a computation -- you simply return it to its original state.
That is "reversible computation".
Unfortunately manufacturers like Intel and AMD have not chosen to pursue this aggressively (one would have to believe that there may be some financial motivation behind this).
I would tend to view this as pushing existing designs, technologies, instruction sets and limits to their farthest bounds before executing a shift to reversible computing.
It may be observed that Eric Drexler, in Nanosystems, Chapter 12, "Nanomechanical Computational Systems" (published in 1992) explained the operation of an atomic scale mechanical gate array that did function as a reversible computational architecture, very much like a "reversible" abacus, because the energy required to reset the calculations was significantly less than that required to erase the matter/energy contained in them. So the information is out there -- and the question remains when will manufacturers bite the bullet and transform the entire framework into a reversible one?
Now in general one doesn't want to accept the delays of reversing the computation when a simple CLR will do.</sentencetext>
</comment>
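The Landauer bound the comment invokes is easy to put a number on. A minimal Python sketch of the minimum heat for erasing one bit, E = k_B * T * ln 2; the room-temperature figure is standard physics, not from the thread.

```python
import math

# Landauer's principle (R. Landauer, cited in the comment above): erasing
# one bit at temperature T dissipates at least k_B * T * ln(2) of heat.

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(temp_kelvin):
    """Minimum energy in joules to erase one bit at the given temperature."""
    return K_B * temp_kelvin * math.log(2)

e_bit = landauer_limit(300)            # room temperature
print(f"{e_bit:.3e} J per erased bit")  # ~2.87e-21 J
```

At roughly 3 zeptojoules per bit, the bound is many orders of magnitude below what 2009-era CPUs actually dissipate, which is the gap reversible computing would chase.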
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306238</id>
	<title>Re:Meh. I'm holding out for a kilocore.</title>
	<author>Anonymous</author>
	<datestamp>1259590560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>It took you some release cycles too long to be original: <a href="http://hardware.slashdot.org/comments.pl?sid=498034&amp;cid=22852170" title="slashdot.org" rel="nofollow">MegaCore</a> [slashdot.org].</htmltext>
<tokentext>It took you some release cycles too long to be original : MegaCore [ slashdot.org ] .</tokentext>
<sentencetext>It took you some release cycles too long to be original: MegaCore [slashdot.org].</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302910</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302962</id>
	<title>Windows 12</title>
	<author>Anonymous</author>
	<datestamp>1259577060000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Windows 12 will require a minimum 48-core 1.4 THz processor and 8TB of RAM. Microsoft are already planning for this one.</htmltext>
<tokentext>Windows 12 will require a minimum 48-core 1.4 THz processor and 8TB of RAM .
Microsoft are already planning for this one .</tokentext>
<sentencetext>Windows 12 will require a minimum 48-core 1.4 THz processor and 8TB of RAM.
Microsoft are already planning for this one.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303160</id>
	<title>Re:Code Name is Offensive</title>
	<author>girlintraining</author>
	<datestamp>1259577660000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>Intel an American company, with the American economy in the shape it's in, I am offended at the codename Bangalore.</p></div><p>First, I agree with you completely. That said, if the processor core is anything like the city it's named for...a 48 core processor on a mesh topology is a good digital analogy to Bangalore -- it is the 3rd most populous urban area in India. It'll also smell horrible, the electrons will be subject to depraved working conditions, and they'll be paid crap for their work, etc. Despite it being a so-called "economic powerhouse", only about 60k of its inhabitants have more than US $1 million net worth. It has over 5.8 million people living there. It makes the wage gap in this country look positively egalitarian.</p>
	</htmltext>
<tokentext>Intel an American company , with the American economy in the shape it 's in , I am offended at the codename Bangalore.First , I agree with you completely .
That said , if the processor core is anything like the city it 's named for...a 48 core processor on a mesh topology is a good digital analogy to Bangalore -- it is the 3rd most populous urban area in India .
It 'll also smell horrible , the electrons will be subject to depraved working conditions , and they 'll be paid crap for their work , etc .
Despite it being a so-called " economic powerhouse " , only about 60k of its inhabitants have more than US $ 1 million net worth .
It has over 5.8 million people living there .
It makes the wage gap in this country look positively egalitarian .</tokentext>
<sentencetext>Intel an American company, with the American economy in the shape it's in, I am offended at the codename Bangalore.First, I agree with you completely.
That said, if the processor core is anything like the city it's named for...a 48 core processor on a mesh topology is a good digital analogy to Bangalore -- it is the 3rd most populous urban area in India.
It'll also smell horrible, the electrons will be subject to depraved working conditions, and they'll be paid crap for their work, etc.
Despite it being a so-called "economic powerhouse", only about 60k of its inhabitants have more than US $1 million net worth.
It has over 5.8 million people living there.
It makes the wage gap in this country look positively egalitarian.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307610</id>
	<title>Re:Code Name is Offensive</title>
	<author>Anonymous</author>
	<datestamp>1259603940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The cougar, a type of panther, is found in the wild in western North America.</p></htmltext>
<tokentext>The cougar , a type of panther , is found in the wild in western North America .</tokentext>
<sentencetext>The cougar, a type of panther, is found in the wild in western North America.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303212</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303978</id>
	<title>Just in time...</title>
	<author>cmeans</author>
	<datestamp>1259580180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>for Windows 8!</htmltext>
<tokentext>for Windows 8 !</tokentext>
<sentencetext>for Windows 8!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30313110</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>Locke2005</author>
	<datestamp>1259866020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Embarrassingly parallel problems are embarrassingly easy to solve (e.g. Beowulf clusters of cheap PCs). What is really needed is a technique for optimizing highly interdependent parallel processes. I believe these processes are currently frequently I/O bound, meaning the memory controllers and cache schemes still need to catch up with the CPU speed.<br> <br>Enforcing cache coherence over 48 CPUs is a non-trivial problem; you run the risk of several processors constantly invalidating each other's cache, dropping your performance to the same as having no cache at all. Ultimately the hardware and software designers need to work hand in hand to achieve optimum performance.</htmltext>
<tokentext>Embarrassingly parallel problems are embarrassingly easy to solve ( e.g. Beowulf clusters of cheap PCs ) .
What is really needed is a technique for optimizing highly interdependent parallel processes .
I believe these processes are currently frequently I/O bound , meaning the memory controllers and cache schemes still need to catch up with the CPU speed .
Enforcing cache coherence over 48 CPUs is a non-trivial problem ; you run the risk of several processors constantly invalidating each other 's cache , dropping your performance to the same as having no cache at all .
Ultimately the hardware and software designers need to work hand in hand to achieve optimum performance .</tokentext>
<sentencetext>Embarrassingly parallel problems are embarrassingly easy to solve (e.g. Beowulf clusters of cheap PCs).
What is really needed is a technique for optimizing highly interdependent parallel processes.
I believe these processes are currently frequently I/O bound, meaning the memory controllers and cache schemes still need to catch up with the CPU speed.
Enforcing cache coherence over 48 CPUs is a non-trivial problem; you run the risk of several processors constantly invalidating each other's cache, dropping your performance to the same as having no cache at all.
Ultimately the hardware and software designers need to work hand in hand to achieve optimum performance.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306254</parent>
</comment>
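The invalidation ping-pong described in this comment can be made concrete with a toy write-invalidate model: two cores alternately writing the same cache line keep invalidating each other's copy, so every access misses. This is a deliberately simplified sketch, not Intel's actual coherence protocol.

```python
# Toy write-invalidate cache model: each write invalidates every other
# cached copy of the line, modeling the ping-pong effect of two cores
# contending for one line. Illustrative only.

class Cache:
    def __init__(self):
        self.lines = set()  # addresses currently held in this cache
        self.misses = 0

    def access(self, addr, write, peers):
        if addr not in self.lines:   # line not present -> miss, then fill
            self.misses += 1
            self.lines.add(addr)
        if write:                    # invalidate every other cached copy
            for p in peers:
                p.lines.discard(addr)

c0, c1 = Cache(), Cache()
for _ in range(100):                 # both cores hammer the same line
    c0.access(0x40, write=True, peers=[c1])
    c1.access(0x40, write=True, peers=[c0])

print(c0.misses + c1.misses)  # 200: every single access missed
```

With the line bouncing between caches, the hit rate collapses to zero, exactly the "same as having no cache at all" outcome the comment warns about.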
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303356</id>
	<title>Synergy!</title>
	<author>HRbnjR</author>
	<datestamp>1259578140000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><p>This new Cloud processor should create synergies with my SOA Portal system and allow me to deploy Enterprise B2B Push based Web 2.0 technologies!</p></htmltext>
<tokentext>This new Cloud processor should create synergies with my SOA Portal system and allow me to deploy Enterprise B2B Push based Web 2.0 technologies !</tokentext>
<sentencetext>This new Cloud processor should create synergies with my SOA Portal system and allow me to deploy Enterprise B2B Push based Web 2.0 technologies!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307056</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>Anonymous</author>
	<datestamp>1259597940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Mod this guy up, heavily.  It's a shame this point will go over so many people's heads.  The article glossed over it as well.</p><p>I read that part and I couldn't say I was surprised (cache coherence is what screws up parallel performance, and all the most scalable architectures for this have relaxed memory models or otherwise assume data sharing not to happen).  However, this is going to be tough to adopt for mainstream use.  Threads are pretty much the dominant mainstream programming model and it's going to be impractically annoying to port stuff to it.</p></htmltext>
<tokentext>Mod this guy up , heavily .
It 's a shame this point will go over so many people 's heads .
The article glossed over it as well . I read that part and I could n't say I was surprised ( cache coherence is what screws up parallel performance , and all the most scalable architectures for this have relaxed memory models or otherwise assume data sharing not to happen ) .
However , this is going to be tough to adopt for mainstream use .
Threads are pretty much the dominant mainstream programming model and it 's going to be impractically annoying to port stuff to it .</tokentext>
<sentencetext>Mod this guy up, heavily.
It's a shame this point will go over so many people's heads.
The article glossed over it as well. I read that part and I couldn't say I was surprised (cache coherence is what screws up parallel performance, and all the most scalable architectures for this have relaxed memory models or otherwise assume data sharing not to happen).
However, this is going to be tough to adopt for mainstream use.
Threads are pretty much the dominant mainstream programming model and it's going to be impractically annoying to port stuff to it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306254</parent>
</comment>
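The share-nothing, message-passing discipline this comment says a non-coherent chip pushes you toward can be sketched in a few lines. Here threads plus queues stand in for cores plus the on-die mesh; the point is the programming style (no shared mutable state, only explicit messages), not the hardware.

```python
import queue
import threading

# Share-nothing worker: it touches no shared mutable state and talks to
# the outside world only through explicit messages on its two queues.

def worker(inbox, outbox):
    while True:
        msg = inbox.get()
        if msg is None:            # sentinel: shut down
            return
        outbox.put(msg * msg)      # do some work, send the result back

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

for n in range(5):                 # send work as messages
    inbox.put(n)
results = [outbox.get() for _ in range(5)]
inbox.put(None)                    # tell the worker to stop
t.join()
print(results)  # [0, 1, 4, 9, 16]
```

Porting lock-based threaded code to this style is the "impractically annoying" part the comment anticipates: every shared variable becomes a message exchange.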
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303618</id>
	<title>Just imagine...</title>
	<author>Anonymous</author>
	<datestamp>1259579040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>A beowulf cluster of those!</p><p>(yes, yes, I'm old, but old memes are sticky)</p></htmltext>
<tokentext>A beowulf cluster of those !
( yes , yes , I 'm old , but old memes are sticky )</tokentext>
<sentencetext>A beowulf cluster of those!
(yes, yes, I'm old, but old memes are sticky)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305950</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>Anonymous</author>
	<datestamp>1259588700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>For a server.
Probably not running windows, as linux and other *n.x type OSes support monstrous amounts of CPUs already.</p></div><p>Yet still can't run Flash smoothly.<nobr> <wbr></nobr>:(</p>
	</htmltext>
<tokentext>For a server .
Probably not running windows , as linux and other * n.x type OSes support monstrous amounts of CPUs already.Yet still ca n't run Flash smoothly .
: (</tokentext>
<sentencetext>For a server.
Probably not running windows, as linux and other *n.x type OSes support monstrous amounts of CPUs already. Yet still can't run Flash smoothly.
:(
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303138</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304756</id>
	<title>Re:Sun HAS a 64 thread processor: UltraSPARC T2</title>
	<author>raftpeople</author>
	<datestamp>1259582940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>64 threads processed in semi-round-robin fashion is not the same as 48 cores.  Different strengths and weaknesses.</htmltext>
<tokentext>64 threads processed in semi-round-robin fashion is not the same as 48 cores .
Different strengths and weaknesses .</tokentext>
<sentencetext>64 threads processed in semi-round-robin fashion is not the same as 48 cores.
Different strengths and weaknesses.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303474</parent>
</comment>
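The trade-off this comment gestures at (many hardware threads time-sharing a core vs. one thread per core) can be put in rough numbers with a toy utilization model. The 75% stall fraction below is an illustrative assumption, not a vendor figure.

```python
# Toy model: SMT keeps a core busy by overlapping the memory stalls of
# co-resident threads, while one thread per core gives each thread an
# unshared core. All figures are illustrative assumptions.

def busy_fraction(threads_per_core, stall_fraction):
    """Fraction of a core kept busy, assuming each thread stalls for
    stall_fraction of its time and SMT overlaps those stalls."""
    return min(1.0, threads_per_core * (1.0 - stall_fraction))

stall = 0.75  # assumed: threads wait on memory 75% of the time
print(8 * busy_fraction(8, stall))    # 8.0: 8 cores x 8 threads, fully utilized
print(48 * busy_fraction(1, stall))   # 12.0: 48 single-thread cores, 25% utilized
```

Under heavy stalling, the T2-style design wrings full utilization out of few cores but each thread gets a slice; the 48-core design wastes utilization per core yet delivers more aggregate work and a whole core per thread. Different strengths and weaknesses, as the comment says.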
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307804</id>
	<title>Re:Meh. I'm holding out for a kilocore.</title>
	<author>phishtahko</author>
	<datestamp>1259606640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>ULTRACORE!!!</htmltext>
<tokentext>ULTRACORE ! !
!</tokentext>
<sentencetext>ULTRACORE!!
!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302910</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303444</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>Anonymous</author>
	<datestamp>1259578380000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>Current memory architecture has trouble keeping data fed to just 2 CPUs; unless each of the 48 cores has its own dedicated cache and memory bus, this is a pretty useless design.</htmltext>
<tokentext>Current memory architecture has trouble keeping data fed to just 2 CPUs ; unless each of the 48 cores has its own dedicated cache and memory bus , this is a pretty useless design .</tokentext>
<sentencetext>Current memory architecture has trouble keeping data fed to just 2 CPUs; unless each of the 48 cores has its own dedicated cache and memory bus, this is a pretty useless design.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044</parent>
</comment>
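The bandwidth worry above is simple division. The 21 GB/s aggregate figure below stands in for a typical 2009-era triple-channel DDR3 setup and is an illustrative assumption, not a number from the thread.

```python
# Rough arithmetic: aggregate DRAM bandwidth split evenly across cores.
# 21 GB/s is an assumed 2009-era triple-channel DDR3 figure.

def per_core_bandwidth(total_gb_s, cores):
    """Even split of memory bandwidth across cores, in GB/s per core."""
    return total_gb_s / cores

print(per_core_bandwidth(21.0, 2))    # 10.5 GB/s per core
print(per_core_bandwidth(21.0, 48))   # 0.4375 GB/s per core
```

An even split leaves each of 48 cores under half a GB/s, which is why per-tile caches and the on-die mesh matter: cores must mostly feed each other, not main memory.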
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304678</id>
	<title>Not the same thing</title>
	<author>Anonymous</author>
	<datestamp>1259582640000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
<htmltext><p>Sun's processors are heavily multi-threaded per core. It is an 8 core CPU where each core can handle 8 threads in hardware. Intel's solution is 48 separate cores; the article doesn't say how many threads per core.</p><p>The difference? Well lots of threads on one core leads to that core being well used. Ideally, you can have it such that all its execution units are always full, it is working to 100% capacity. However it leads to slower execution per thread, since the threads are sharing a core and competing for resources.</p><p>Something like Sun's solution would be good for servers, if you have a lot of processes and you want to avoid the context switching penalty you get from going back and forth, but no process really uses all that much power. Web servers with lots of scripts and DB access and such would probably benefit from it quite a lot.</p><p>However it wouldn't be so useful for a program that tosses out multiple threads to get more power. Like say you have a 3D rendering engine and it has 4 rendering threads. If all those threads got assigned to one core, well it would run little faster than a single thread running on that core. What you want is each thread on its own core to give you, ideally, a 4x speed increase over a single thread.</p><p>So in general, with Intel's chips you see not a lot of threads per core. 1 and 2 are all they've had so far (P4s and Core i7s are 2 threads per core, Core 2s are 1 thread per core). They also have features such as the ability for a single core to boost its clock speed if the others are not being used much, to get more performance for one thread and still stay in the thermal spec. These are generally desktop or workstation oriented features. You aren't necessarily running many different apps that need power, you are running one or maybe two apps that need power.</p><p>As for this, well I don't know what they are targeting, or how many threads/core it supports.</p></htmltext>
<tokentext>Sun 's processors are heavily multi-threaded per core .
It is an 8 core CPU where each core can handle 8 threads in hardware .
Intel 's solution is 48 separate cores , does n't say how many threads per core.The difference ?
Well lots of threads on one core leads to that core being well used .
Ideally , you can have it such that all its execution units are always full , it is working to 100 % capacity .
However it leads to slower execution per thread , since the threads are sharing a core and competing for resources .
Something like Sun 's solution would be good for servers , if you have a lot of processes and you want to avoid the context switching penalty you get from going back and forth , but no process really uses all that much power .
Web servers with lots of scripts and DB access and such would probably benefit from it quite a lot .
However it would n't be so useful for a program that tosses out multiple threads to get more power .
Like say you have a 3D rendering engine and it has 4 rendering threads .
If all those threads got assigned to one core , well it would run little faster than a single thread running on that core .
What you want is each thread on its own core to give you , ideally , a 4x speed increase over a single thread .
So in general , with Intel 's chips you see not a lot of threads per core .
1 and 2 are all they 've had so far ( P4s and Core i7s are 2 threads per core , Core 2s are 1 thread per core ) .
They also have features such as the ability for a single core to boost its clock speed if the others are not being used much , to get more performance for one thread and still stay in the thermal spec .
These are generally desktop or workstation oriented features .
You are n't necessarily running many different apps that need power , you are running one or maybe two apps that need power .
As for this , well I do n't know what they are targeting , or how many threads/core it supports .</tokentext>
<sentencetext>Sun's processors are heavily multi-threaded per core.
It is an 8 core CPU where each core can handle 8 threads in hardware.
Intel's solution is 48 separate cores, doesn't say how many threads per core.
The difference?
Well lots of threads on one core leads to that core being well used.
Ideally, you can have it such that all its execution units are always full, it is working to 100% capacity.
However it leads to slower execution per thread, since the threads are sharing a core and competing for resources.
Something like Sun's solution would be good for servers, if you have a lot of processes and you want to avoid the context switching penalty you get from going back and forth, but no process really uses all that much power.
Web servers with lots of scripts and DB access and such would probably benefit from it quite a lot.
However it wouldn't be so useful for a program that tosses out multiple threads to get more power.
Like say you have a 3D rendering engine and it has 4 rendering threads.
If all those threads got assigned to one core, well it would run little faster than a single thread running on that core.
What you want is each thread on its own core to give you, ideally, a 4x speed increase over a single thread.
So in general, with Intel's chips you see not a lot of threads per core.
1 and 2 are all they've had so far (P4s and Core i7s are 2 threads per core, Core 2s are 1 thread per core).
They also have features such as the ability for a single core to boost its clock speed if the others are not being used much, to get more performance for one thread and still stay in the thermal spec.
These are generally desktop or workstation oriented features.
You aren't necessarily running many different apps that need power, you are running one or maybe two apps that need power.
As for this, well I don't know what they are targeting, or how many threads/core it supports.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303474</parent>
</comment>
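The thread-placement trade-off described in the comment above can be sketched with a toy model. The contention penalty here is an illustrative assumption, not a measured figure: N threads packed onto one core serialize their work (plus some resource contention), while N threads spread across N cores run in parallel.

```python
# Toy model of threads-per-core placement: N threads sharing one core
# vs. N threads spread across N cores. The 10% contention penalty is
# purely illustrative, not a measured number.

def time_shared_core(n_threads, work_per_thread=1.0, contention=0.1):
    """All threads compete for one core: the work is serialized,
    with a penalty for competing over execution resources."""
    return n_threads * work_per_thread * (1 + contention)

def time_one_core_each(n_threads, work_per_thread=1.0):
    """Each thread gets its own core: threads run fully in parallel,
    so wall-clock time is one thread's work."""
    return work_per_thread

shared = time_shared_core(4)   # e.g. 4 rendering threads on one core
spread = time_one_core_each(4) # the same 4 threads, one core each
print(f"4 threads, 1 core:  {shared:.1f} time units")
print(f"4 threads, 4 cores: {spread:.1f} time units")
print(f"speedup from spreading: {shared / spread:.1f}x")
```

The model also shows why many hardware threads per core still pay off for server workloads: if each thread stalls often (DB or network waits), the shared core's execution units would otherwise sit idle, and the contention penalty largely disappears.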
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303440</id>
	<title>Is there enough cpu to chipset bandwidth to make use</title>
	<author>Joe The Dragon</author>
	<datestamp>1259578380000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>Is there enough cpu to chipset bandwidth to make use of all this cpu power?</p></htmltext>
<tokenext>Is there enough cpu to chipset bandwidth to make use of all this cpu power ?</tokentext>
<sentencetext>Is there enough cpu to chipset bandwidth to make use of all this cpu power?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303728</id>
	<title>Re:Yet another cloud?</title>
	<author>zullnero</author>
	<datestamp>1259579400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>No, it's just that it's a hot keyword, and a whole lot of people can't be bothered to look up what it really means.  And knowing Intel pretty well, their guys most likely know full well what it is, and they took the name as a taunt to anyone who would dare consider distributing workload instead of buying more server hardware and doing it the way that benefits Intel's bottom line.</htmltext>
<tokenext>No , it 's just that it 's a hot keyword , and a whole lot of people ca n't be bothered to look up what it really means .
And knowing Intel pretty well , their guys most likely know full well what it is , and they took the name as a taunt to anyone who would dare consider distributing workload instead of buying more server hardware and doing it the way that benefits Intel 's bottom line .</tokentext>
<sentencetext>No, it's just that it's a hot keyword, and a whole lot of people can't be bothered to look up what it really means.
And knowing Intel pretty well, their guys most likely know full well what it is, and they took the name as a taunt to anyone who would dare consider distributing workload instead of buying more server hardware and doing it the way that benefits Intel's bottom line.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304618</id>
	<title>Re:Yet another cloud?</title>
	<author>Enderandrew</author>
	<datestamp>1259582460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Your post is in the cloud. Does that make your post some meta-commentary? Is my post meta-meta?</p></htmltext>
<tokenext>Your post is in the cloud .
Does that make your post some meta-commentary ?
Is my post meta-meta ?</tokentext>
<sentencetext>Your post is in the cloud.
Does that make your post some meta-commentary?
Is my post meta-meta?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307560</id>
	<title>Re:Code Name is Offensive</title>
	<author>Anonymous</author>
	<datestamp>1259603160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>Does the fact that none* of the Apple Operating system names are of animals <b>not native to America</b>?</p><p>*After 5.1, which is "Kodiak" - which can be found in Alaska.<br>5.2 Mac OS X v10.0 "Cheetah"<br>5.3 Mac OS X v10.1 "Puma"</p></div><p>The cougar (Puma concolor), also known as <b>puma</b>, mountain lion, mountain cat, catamount or <b>panther</b>, depending on the region, is a mammal of the Felidae family, <b>native to the Americas</b>.  http://en.wikipedia.org/wiki/Cougar</p><div class="quote"><p>5.4 Mac OS X v10.2 "Jaguar"</p></div><p>The Jaguar (Panthera onca) is a big cat, a feline in the Panthera genus, and is the only Panthera species <b>found in the Americas</b>.  http://en.wikipedia.org/wiki/Jaguar</p><div class="quote"><p>5.5 Mac OS X v10.3 "Panther"</p></div><p>See cougar above.</p>
	</htmltext>
<tokenext>Does the fact that none * of the Apple Operating system names are of animals not native to America ?
* After 5.1 , which is " Kodiak " - which can be found in Alaska .
5.2 Mac OS X v10.0 " Cheetah "
5.3 Mac OS X v10.1 " Puma "
The cougar ( Puma concolor ) , also known as puma , mountain lion , mountain cat , catamount or panther , depending on the region , is a mammal of the Felidae family , native to the Americas .
http : //en.wikipedia.org/wiki/Cougar
5.4 Mac OS X v10.2 " Jaguar "
The Jaguar ( Panthera onca ) is a big cat , a feline in the Panthera genus , and is the only Panthera species found in the Americas .
http : //en.wikipedia.org/wiki/Jaguar
5.5 Mac OS X v10.3 " Panther "
See cougar above .</tokentext>
<sentencetext>Does the fact that none* of the Apple Operating system names are of animals not native to America?
*After 5.1, which is "Kodiak" - which can be found in Alaska.
5.2 Mac OS X v10.0 "Cheetah"
5.3 Mac OS X v10.1 "Puma"
The cougar (Puma concolor), also known as puma, mountain lion, mountain cat, catamount or panther, depending on the region, is a mammal of the Felidae family, native to the Americas.
http://en.wikipedia.org/wiki/Cougar
5.4 Mac OS X v10.2 "Jaguar"
The Jaguar (Panthera onca) is a big cat, a feline in the Panthera genus, and is the only Panthera species found in the Americas.
http://en.wikipedia.org/wiki/Jaguar
5.5 Mac OS X v10.3 "Panther"
See cougar above.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303212</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307268</id>
	<title>RAM / io</title>
	<author>smash</author>
	<datestamp>1259599680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Now all we need is memory density and IO throughput to catch up. For most server/VM deployments, memory and IO are your bottlenecks. Sure, this will be useful in niche markets such as scientific research, but a "cloud" processor it is not... without the IO and RAM to back up all those cores, very few people will be able to actually make use of them in a single machine.</htmltext>
<tokenext>now all we need is memory density and IO throughput to catch up .
for most server/vm deployments memory and IO are your bottlenecks .
Sure , this will be useful in niche markets such as scientific research , but a " cloud " processor it is not... without the IO and RAM to back up all those cores very few people will be able to actually make use of them in a single machine .</tokentext>
<sentencetext>now all we need is memory density and IO throughput to catch up.
for most server/vm deployments memory and IO are your bottlenecks.
Sure, this will be useful in niche markets such as scientific research, but a "cloud" processor it is not... without the IO and RAM to back up all those cores very few people will be able to actually make use of them in a single machine.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306488</id>
	<title>thrashtastic</title>
	<author>funkboy</author>
	<datestamp>1259592660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>At this level of parallelism, it seems to me that the routing/switching and data management between the cores will become far more important than the raw number of cores or how fast each core is.  Very similar to cluster computing in that the topology of the cluster (and the interconnect bandwidth &amp; latency) is just as important as the power of each individual node.</p><p>Projects like Grand Central Dispatch are a good step in the right direction to making general-purpose computing reasonably multithreaded, but the chip itself still has to deal with shuffling all that data around.  If there's any hope of keeping the pin count within modern package limitations (e.g. socket 1366) then either a solid percentage of the die's real estate will have to be devoted to interconnect &amp; routing logic, or some serious compromises will have to be made.</p><p>BTW, can anyone explain how this architecture is substantially different from <a href="http://en.wikipedia.org/wiki/Larrabee\_(GPU)" title="wikipedia.org" rel="nofollow">Larrabee</a> [wikipedia.org], which is also a whole mess of x86 cores on one die?</p></htmltext>
<tokenext>At this level of parallelism , it seems to me that the routing/switching and data management between the cores will become far more important than the raw number of cores or how fast each core is .
Very similar to cluster computing in that the topology of the cluster ( and the interconnect bandwidth &amp; latency ) is just as important as the power of each individual node .
Projects like Grand Central Dispatch are a good step in the right direction to making general-purpose computing reasonably multithreaded , but the chip itself still has to deal with shuffling all that data around .
If there 's any hope of keeping the pin count within modern package limitations ( e.g. socket 1366 ) then either a solid percentage of the die 's real estate will have to be devoted to interconnect &amp; routing logic , or some serious compromises will have to be made .
BTW , can anyone explain how this architecture is substantially different from Larrabee [ wikipedia.org ] , which is also a whole mess of x86 cores on one die ?</tokentext>
<sentencetext>At this level of parallelism, it seems to me that the routing/switching and data management between the cores will become far more important than the raw number of cores or how fast each core is.
Very similar to cluster computing in that the topology of the cluster (and the interconnect bandwidth &amp; latency) is just as important as the power of each individual node.
Projects like Grand Central Dispatch are a good step in the right direction to making general-purpose computing reasonably multithreaded, but the chip itself still has to deal with shuffling all that data around.
If there's any hope of keeping the pin count within modern package limitations (e.g. socket 1366) then either a solid percentage of the die's real estate will have to be devoted to interconnect &amp; routing logic, or some serious compromises will have to be made.
BTW, can anyone explain how this architecture is substantially different from Larrabee [wikipedia.org], which is also a whole mess of x86 cores on one die?</sentencetext>
</comment>
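The interconnect concern in the comment above can be made concrete with a little arithmetic. The summary says the 48 cores sit 2 per tile on a 2D mesh; a 6x4 tile grid is my assumption about the layout, used here only to illustrate how average hop count grows with mesh size:

```python
# Average hop count between tiles in a 2D mesh, assuming the SCC's
# 48 cores sit 2 per tile on a 6x4 grid (the grid shape is an
# assumption for illustration; the real floorplan may differ).
from itertools import product

def avg_mesh_hops(width, height):
    """Mean Manhattan (X-Y routed) distance over all ordered pairs
    of distinct tiles in a width x height mesh."""
    tiles = list(product(range(width), range(height)))
    pairs = hops = 0
    for (x1, y1), (x2, y2) in product(tiles, tiles):
        if (x1, y1) != (x2, y2):
            hops += abs(x1 - x2) + abs(y1 - y2)  # hops along the mesh
            pairs += 1
    return hops / pairs

print(f"average hops on a 6x4 mesh: {avg_mesh_hops(6, 4):.2f}")  # about 3.33
```

Every extra hop costs latency and consumes link bandwidth on intermediate tiles, which is why topology and routing logic start to rival raw core count at this scale, just as in the cluster analogy.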
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30332784</id>
	<title>Re:Is there enough cpu to chipset bandwidth to make</title>
	<author>narcberry</author>
	<datestamp>1259950080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>In order to achieve the highest efficiency, we keep the same instructions loaded on 47 of the processors, and reserve the full bus for the 48th.</p></htmltext>
<tokenext>In order to achieve the highest efficiency , we keep the same instructions loaded on 47 of the processors , and reserve the full bus for the 48th .</tokentext>
<sentencetext>In order to achieve the highest efficiency, we keep the same instructions loaded on 47 of the processors, and reserve the full bus for the 48th.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303440</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304344</id>
	<title>The only way it could have been called "Bangalore"</title>
	<author>Anonymous</author>
	<datestamp>1259581440000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>Is if some of the cores are only allowed to perform menial tasks (they were born that way) and the rest of the cores will only do something if you slip them a little cash. Oh, and code with comments doesn't run.</p></htmltext>
<tokenext>Is if some of the cores are only allowed to perform menial tasks ( they were born that way ) and the rest of the cores will only do something if you slip them a little cash .
Oh , and code with comments does n't run .</tokentext>
<sentencetext>Is if some of the cores are only allowed to perform menial tasks (they were born that way) and the rest of the cores will only do something if you slip them a little cash.
Oh, and code with comments doesn't run.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304670</id>
	<title>Re:Code Name is Offensive</title>
	<author>Anonymous</author>
	<datestamp>1259582640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>anyway it's in Gaelic: Bang Galore, more bangs for your buck</htmltext>
<tokenext>anyway it 's in Gaelic : Bang Galore , more bangs for your buck</tokentext>
<sentencetext>anyway it's in Gaelic: Bang Galore, more bangs for your buck</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303040</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303474</id>
	<title>Sun HAS a 64 thread processor: UltraSPARC T2</title>
	<author>IYagami</author>
	<datestamp>1259578560000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>More info at:</p><p><a href="http://www.sun.com/processors/UltraSPARC-T2/specs.xml" title="sun.com">http://www.sun.com/processors/UltraSPARC-T2/specs.xml</a> [sun.com]</p></htmltext>
<tokenext>More info at : http : //www.sun.com/processors/UltraSPARC-T2/specs.xml [ sun.com ]</tokentext>
<sentencetext>More info at:http://www.sun.com/processors/UltraSPARC-T2/specs.xml [sun.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305074</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>TheRaven64</author>
	<datestamp>1259584320000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext>Processors access memory via a cache.  When you load a word from memory to a register, it is loaded from cache.  If it is not already in cache, then you get a cache miss, the pipeline stalls (and runs another context on SMT chips), and the memory controller fetches a cache line of data from memory.  Cache lines are typically around 128 bytes.  Modern memory is typically connected via a channel that is 64 bits wide.  That means that it takes 16 reads to fill a cache line.  If you have your memory arranged in matched pairs of modules then it can fill it in 8 pairs of reads instead, which takes half as long.<p>
On any vaguely recent non-Intel chip (including workstation and server chips for most architectures), you have a memory controller on die for each chip (sometimes for each core).  Each chip is connected to a separate set of memory.  A simple example of this is a two-way Opteron.  Each will have its own, private, memory.  If you need to access memory attached to the other processor then it has to be forwarded over the HyperTransport link (a point-to-point message passing channel that AMD uses to run a cache coherency protocol).  If your OS did a good job of scheduling, then all of the RAM allocated to a process will be on the RAM chips close to where the process is running.  </p><p>
The reason Intel and Sun are pushing fully buffered DIMMs for their new chips is that FBDIMMs use a serial channel, rather than a parallel one, for connecting the memory to the memory controller.  This means that you need fewer pins on the memory controller for connecting up a DIMM and so you can have several memory controllers on a single die without your chip turning into a porcupine.  You probably wouldn't have 48 memory controllers on a 48-core chip, but you might have six, with every 8 cores sharing a level-3 cache and a memory controller.</p></htmltext>
<tokenext>Processors access memory via a cache .
When you load a word from memory to a register , it is loaded from cache .
If it is not already in cache , then you get a cache miss , the pipeline stalls ( and runs another context on SMT chips ) , and the memory controller fetches a cache line of data from memory .
Cache lines are typically around 128 bytes .
Modern memory is typically connected via a channel that is 64 bits wide .
That means that it takes 16 reads to fill a cache line .
If you have your memory arranged in matched pairs of modules then it can fill it in 8 pairs of reads instead , which takes half as long .
On any vaguely recent non-Intel chip ( including workstation and server chips for most architectures ) , you have a memory controller on die for each chip ( sometimes for each core ) .
Each chip is connected to a separate set of memory .
A simple example of this is a two-way Opteron .
Each will have its own , private , memory .
If you need to access memory attached to the other processor then it has to be forwarded over the HyperTransport link ( a point-to-point message passing channel that AMD uses to run a cache coherency protocol ) .
If your OS did a good job of scheduling , then all of the RAM allocated to a process will be on the RAM chips close to where the process is running .
The reason Intel and Sun are pushing fully buffered DIMMs for their new chips is that FBDIMMs use a serial channel , rather than a parallel one , for connecting the memory to the memory controller .
This means that you need fewer pins on the memory controller for connecting up a DIMM and so you can have several memory controllers on a single die without your chip turning into a porcupine .
You probably would n't have 48 memory controllers on a 48-core chip , but you might have six , with every 8 cores sharing a level-3 cache and a memory controller .</tokentext>
<sentencetext>Processors access memory via a cache.
When you load a word from memory to a register, it is loaded from cache.
If it is not already in cache, then you get a cache miss, the pipeline stalls (and runs another context on SMT chips), and the memory controller fetches a cache line of data from memory.
Cache lines are typically around 128 bytes.
Modern memory is typically connected via a channel that is 64 bits wide.
That means that it takes 16 reads to fill a cache line.
If you have your memory arranged in matched pairs of modules then it can fill it in 8 pairs of reads instead, which takes half as long.
On any vaguely recent non-Intel chip (including workstation and server chips for most architectures), you have a memory controller on die for each chip (sometimes for each core).
Each chip is connected to a separate set of memory.
A simple example of this is a two-way Opteron.
Each will have its own, private, memory.
If you need to access memory attached to the other processor then it has to be forwarded over the HyperTransport link (a point-to-point message passing channel that AMD uses to run a cache coherency protocol).
If your OS did a good job of scheduling, then all of the RAM allocated to a process will be on the RAM chips close to where the process is running.
The reason Intel and Sun are pushing fully buffered DIMMs for their new chips is that FBDIMMs use a serial channel, rather than a parallel one, for connecting the memory to the memory controller.
This means that you need fewer pins on the memory controller for connecting up a DIMM and so you can have several memory controllers on a single die without your chip turning into a porcupine.
You probably wouldn't have 48 memory controllers on a 48-core chip, but you might have six, with every 8 cores sharing a level-3 cache and a memory controller.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303444</parent>
</comment>
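The cache-line-fill arithmetic in the comment above is worth spelling out, since it is what drives the pin-count pressure that FBDIMMs address:

```python
# The arithmetic from the comment above: how many memory reads it
# takes to fill one cache line over a 64-bit channel, and how matched
# pairs of modules (dual channel) halve that.
CACHE_LINE_BYTES = 128   # "typically around 128 bytes"
CHANNEL_BITS = 64        # one channel delivers 8 bytes per read

bytes_per_read = CHANNEL_BITS // 8
reads_single = CACHE_LINE_BYTES // bytes_per_read  # 128 / 8 = 16 reads
reads_paired = reads_single // 2                   # matched pair: 8 pairs of reads

print(f"single channel: {reads_single} reads per cache line")
print(f"paired modules: {reads_paired} pairs of reads per cache line")
```

The same logic explains the FBDIMM point: a wide parallel channel needs many controller pins per DIMM, so putting several controllers on one die only becomes practical once each channel is narrow and serial.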
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303816</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>RyuuzakiTetsuya</author>
	<datestamp>1259579700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>webserver on a high traffic site.  Either serving up lots of db connections or a lot of http connections, either way, I can imagine this having specific uses.</p></htmltext>
<tokenext>webserver on a high traffic site .
Either serving up lots of db connections or a lot of http connections , either way , I can imagine this having specific uses .</tokentext>
<sentencetext>webserver on a high traffic site.
Either serving up lots of db connections or a lot of http connections, either way, I can imagine this having specific uses.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304354</id>
	<title>Catching up</title>
	<author>pckl300</author>
	<datestamp>1259581440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>When combined, the 48 cores approach the processing power of one i7...</htmltext>
<tokenext>When combined , the 48 cores approach the processing power of one i7.. .</tokentext>
<sentencetext>When combined, the 48 cores approach the processing power of one i7...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303632</id>
	<title>Re:Yet another cloud?</title>
	<author>Anonymous</author>
	<datestamp>1259579100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><a href="http://en.wikipedia.org/wiki/File:Cloud\_computing\_types.svg" title="wikipedia.org">http://en.wikipedia.org/wiki/File:Cloud\_computing\_types.svg</a> [wikipedia.org]</p><p>Now imagine you'd have this 'cloud CPU'  as your server at home that runs apps that you could access with Google Chrome OS... Great family server... Or remote X and play Doom3 at work from your netbook.</p><p>Sounds interesting now?<nobr> <wbr></nobr>;)</p></htmltext>
<tokenext>http : //en.wikipedia.org/wiki/File : Cloud \ _computing \ _types.svg [ wikipedia.org ]
Now imagine you 'd have this 'cloud CPU ' as your server at home that runs apps that you could access with Google Chrome OS... Great family server... Or remote X and play Doom3 at work from your netbook .
Sounds interesting now ?
; )</tokentext>
<sentencetext>http://en.wikipedia.org/wiki/File:Cloud\_computing\_types.svg [wikipedia.org]
Now imagine you'd have this 'cloud CPU' as your server at home that runs apps that you could access with Google Chrome OS... Great family server... Or remote X and play Doom3 at work from your netbook.
Sounds interesting now?
;)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307524</id>
	<title>48-way hyper threading</title>
	<author>Anonymous</author>
	<datestamp>1259602680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Does this chip have HT? I suspect that Intel, as usual, hides facts about these things. Maybe they "forgot" to tell everyone that the cores were *logical* ones?<br>48 physical cores + 2x HT for each core would have been nice though, but then they would have marketed it as a 96-processor chip for sure.</p></htmltext>
<tokenext>Does this chip have HT ?
I suspect that Intel , as usual , hides facts about these things .
Maybe they " forgot " to tell everyone that the cores were * logical * ones ?
48 physical cores + 2x HT for each core would have been nice though , but then they would have marketed it as a 96-processor chip for sure .</tokentext>
<sentencetext>Does this chip have HT?
I suspect that Intel, as usual, hides facts about these things.
Maybe they "forgot" to tell everyone that the cores were *logical* ones?
48 physical cores + 2x HT for each core would have been nice though, but then they would have marketed it as a 96-processor chip for sure.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304100</id>
	<title>Re:Yet another cloud?</title>
	<author>DragonWriter</author>
	<datestamp>1259580600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Why is everything called cloud these days? Yet another du jour buzzword. Is this really justified here?</p></div></blockquote><p>Sure, as one of the main uses of a 48-core processor, I would expect, is to be able to dynamically provision slices of it as different logical machines, which is what cloud computing is. So calling this a "cloud" processor makes as much sense as calling two different subsets of the Atom line "Netbook" and "Nettop" processors, after the kind of use for which they are intended.</p>
	</htmltext>
<tokenext>Why is everything called cloud these days ?
Yet another du jour buzzword .
Is this really justified here ?
Sure , as one of the main uses of a 48-core processor , I would expect , is to be able to dynamically provision slices of it as different logical machines , which is what cloud computing is .
So calling this a " cloud " processor makes as much sense as calling two different subsets of the Atom line " Netbook " and " Nettop " processors , after the kind of use for which they are intended .</tokentext>
<sentencetext>Why is everything called cloud these days?
Yet another du jour buzzword.
Is this really justified here?
Sure, as one of the main uses of a 48-core processor, I would expect, is to be able to dynamically provision slices of it as different logical machines, which is what cloud computing is.
So calling this a "cloud" processor makes as much sense as calling two different subsets of the Atom line "Netbook" and "Nettop" processors, after the kind of use for which they are intended.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062</id>
	<title>Yet another cloud?</title>
	<author>Mortiss</author>
	<datestamp>1259577360000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>Why is everything called cloud these days? Yet another du jour buzzword. Is this really justified here?</p></htmltext>
<tokenext>Why is everything called cloud these days ?
Yet another du jour buzzword .
Is this really justified here ?</tokentext>
<sentencetext>Why is everything called cloud these days?
Yet another du jour buzzword.
Is this really justified here?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306570</id>
	<title>Re:Yet another cloud?</title>
	<author>dbIII</author>
	<datestamp>1259593380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It makes sense here.<br>The chip turns into a cloud if you apply power without a heatsink.</htmltext>
<tokenext>It makes sense here .
The chip turns into a cloud if you apply power without a heatsink .</tokentext>
<sentencetext>It makes sense here.
The chip turns into a cloud if you apply power without a heatsink.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302918</id>
	<title>Codenames</title>
	<author>Anonymous</author>
	<datestamp>1259576940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Why can companies not come up with decent code names?  For instance, this would be the perfect case for it being codenamed "Beowulf".</htmltext>
<tokentext>Why can companies not come up with decent code names .
For instance , this would be the perfect case for it being codenamed " Beowulf " .</tokentext>
<sentencetext>Why can companies not come up with decent code names.
For instance, this would be the perfect case for it being codenamed "Beowulf".</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308004</id>
	<title>Bangalore == Humpalot</title>
	<author>FreakyGreenLeaky</author>
	<datestamp>1259609760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'm probably the only one, being married and denied sex, but Bangalore reminds me of Humpalot, Ivana Humpalot.</p></htmltext>
<tokentext>I 'm probably the only one , being married and denied sex , but Bangalore reminds me of Humpalot , Ivana Humpalot .</tokentext>
<sentencetext>I'm probably the only one, being married and denied sex, but Bangalore reminds me of Humpalot, Ivana Humpalot.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306190</id>
	<title>Re:48 is sufficient for most Ph.D. dissertations.</title>
	<author>erikscott</author>
	<datestamp>1259590200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>But if they didn't start two years ago, their stuff won't be publishable by the time it's finished.  48 cores isn't really <i>jack</i> these days.  To get on the Top 500 list, you need, very roughly, &gt;2K cores.  It's still a big, expensive game to play in, and if your institution doesn't have a pretty big machine, you need to look at the solicitations from the national labs and get some processor-hours there.</htmltext>
<tokentext>But if they did n't start two years ago , their stuff wo n't be publishable by the time it 's finished .
48 cores is n't really jack these days .
To get on the Top 500 list , you need , very roughly , &gt; 2K cores .
It 's still a big , expensive game to play in , and if your institution does n't have a pretty big machine , you need to look at the solicitations from the national labs and get some processor-hours there .</tokentext>
<sentencetext>But if they didn't start two years ago, their stuff won't be publishable by the time it's finished.
48 cores isn't really jack these days.
To get on the Top 500 list, you need, very roughly, &gt;2K cores.
It's still a big, expensive game to play in, and if your institution doesn't have a pretty big machine, you need to look at the solicitations from the national labs and get some processor-hours there.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303436</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928</id>
	<title>Code Name is Offensive</title>
	<author>Anonymous</author>
	<datestamp>1259576940000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Intel an American company, with the American economy in the shape it's in, I am offended at the codename Bangalore.</p></htmltext>
<tokentext>Intel an American company , with the American economy in the shape it 's in , I am offended at the codename Bangalore .</tokentext>
<sentencetext>Intel an American company, with the American economy in the shape it's in, I am offended at the codename Bangalore.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303436</id>
	<title>48 is sufficient for most Ph.D. dissertations.</title>
	<author>Anonymous</author>
	<datestamp>1259578380000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext>A big market for this chip is the computer-science department of 2nd-tier universities like the University of California-Santa Barbara (UCSB).
<p>
Unlike Stanford University, UCSB lacks the money to build a full-blown multiprocessor system.  If UCSB had such a system back in the 1990s, then UCSB would likely have produced as much multiprocessor research as Stanford University.
</p><p>
This 48-core processor chip, due to the fact that it will eventually be a commercial product mass-produced by the millions of units, will be economically cheap.  This chip will enable UCSB to build or buy a cheap multiprocessor system.
</p><p>
A bunch of graduate students is already salivating at the prospect.  They are drooling.</p></htmltext>
<tokentext>A big market for this chip is the computer-science department of 2nd-tier universities like the University of California-Santa Barbara ( UCSB ) .
Unlike Stanford University , UCSB lacks the money to build a full-blown multiprocessor system .
If UCSB had such a system back in the 1990s , then UCSB would likely have produced as much multiprocessor research as Stanford University .
This 48-core processor chip , due to the fact that it will eventually be a commercial product mass-produced by the millions of units , will be economically cheap .
This chip will enable UCSB to build or buy a cheap multiprocessor system .
A bunch of graduate students is already salivating at the prospect .
They are drooling .</tokentext>
<sentencetext>A big market for this chip is the computer-science department of 2nd-tier universities like the University of California-Santa Barbara (UCSB).
Unlike Stanford University, UCSB lacks the money to build a full-blown multiprocessor system.
If UCSB had such a system back in the 1990s, then UCSB would likely have produced as much multiprocessor research as Stanford University.
This 48-core processor chip, due to the fact that it will eventually be a commercial product mass-produced by the millions of units, will be economically cheap.
This chip will enable UCSB to build or buy a cheap multiprocessor system.
A bunch of graduate students is already salivating at the prospect.
They are drooling.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302910</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303528</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>Yaztromo</author>
	<datestamp>1259578740000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><div class="quote"><p>Can someone elaborate on why you'd want 48 full processors, rather than a processor with two (dual) or four (quad) "cores" (I'm presuming core in this case == FPU in the article).</p></div><p>Bad assumption.  In this case, we're talking about (what you would consider) a 48 core CPU.  Previous designs would have apparently contained only a small number of full processing cores, and a large number of parallel units suitable only for floating point calculations (which can be great for various types of scientific calculations and simulations).  This new design contains 48 discrete IA x86 cores.
</p><p>Seems like the type of processor <a href="http://en.wikipedia.org/wiki/Grand_Central_Dispatch" title="wikipedia.org">Grand Central Dispatch</a> [wikipedia.org] was designed for.
</p><p>Yaz.</p>
	</htmltext>
<tokentext>Can someone elaborate on why you 'd want 48 full processors , rather than a processor with two ( dual ) or four ( quad ) " cores " ( I 'm presuming core in this case = = FPU in the article ) .
Bad assumption .
In this case , we 're talking about ( what you would consider ) a 48 core CPU .
Previous designs would have apparently contained only a small number of full processing cores , and a large number of parallel units suitable only for floating point calculations ( which can be great for various types of scientific calculations and simulations ) .
This new design contains 48 discrete IA x86 cores .
Seems like the type of processor Grand Central Dispatch [ wikipedia.org ] was designed for .
Yaz .</tokentext>
<sentencetext>Can someone elaborate on why you'd want 48 full processors, rather than a processor with two (dual) or four (quad) "cores" (I'm presuming core in this case == FPU in the article).
Bad assumption.
In this case, we're talking about (what you would consider) a 48 core CPU.
Previous designs would have apparently contained only a small number of full processing cores, and a large number of parallel units suitable only for floating point calculations (which can be great for various types of scientific calculations and simulations).
This new design contains 48 discrete IA x86 cores.
Seems like the type of processor Grand Central Dispatch [wikipedia.org] was designed for.
Yaz.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307936</id>
	<title>fuck doing research to figure out new apps</title>
	<author>mofag</author>
	<datestamp>1259608800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>just hand it over now - we'll find uses for it.</p></htmltext>
<tokentext>just hand it over now - we 'll find uses for it .</tokentext>
<sentencetext>just hand it over now - we'll find uses for it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308646</id>
	<title>Re:Code Name is Offensive</title>
	<author>cheekyboy</author>
	<datestamp>1259837040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Will there be an 11.0 or 10.10 called Pussy Power? or Hello Kitty</p></htmltext>
<tokentext>Will there be an 11.0 or 10.10 called Pussy Power ?
or Hello Kitty</tokentext>
<sentencetext>Will there be an 11.0 or 10.10 called Pussy Power?
or Hello Kitty</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303212</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303024</id>
	<title>Re:Code Name is Offensive</title>
	<author>Anonymous</author>
	<datestamp>1259577240000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>0</modscore>
	<htmltext><p>It makes sense, afterall.</p><p>
&nbsp; &nbsp; Intel made a new 48 core x86 chip called cloud computing on a chip.<br>
&nbsp; &nbsp; Funny thing is, cloud computer is like outsourcing your computer hardware to a bigger machine that's cheaper to use due to the rental pricings... etc...<br>
&nbsp; &nbsp; The name of the chip you ask?<br>
&nbsp; &nbsp; BANGALORE (outsourcing capital of the world)</p></htmltext>
<tokentext>It makes sense , afterall .
Intel made a new 48 core x86 chip called cloud computing on a chip .
Funny thing is , cloud computer is like outsourcing your computer hardware to a bigger machine that 's cheaper to use due to the rental pricings... etc...
The name of the chip you ask ?
BANGALORE ( outsourcing capital of the world )</tokentext>
<sentencetext>It makes sense, afterall.
Intel made a new 48 core x86 chip called cloud computing on a chip.
Funny thing is, cloud computer is like outsourcing your computer hardware to a bigger machine that's cheaper to use due to the rental pricings... etc...
The name of the chip you ask?
BANGALORE (outsourcing capital of the world)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305554</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>Anonymous</author>
	<datestamp>1259586360000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>To run Ruby on Rails.</p></htmltext>
<tokentext>To run Ruby on Rails .</tokentext>
<sentencetext>To run Ruby on Rails.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303010</id>
	<title>Re:Code Name is Offensive</title>
	<author>RManning</author>
	<datestamp>1259577240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Intel an American company, with the American economy in the shape it's in, I am offended at the codename Bangalore.</p></div><p>Not to be dense, but why is that offensive?</p>
	</htmltext>
<tokentext>Intel an American company , with the American economy in the shape it 's in , I am offended at the codename Bangalore .
Not to be dense , but why is that offensive ?</tokentext>
<sentencetext>Intel an American company, with the American economy in the shape it's in, I am offended at the codename Bangalore.
Not to be dense, but why is that offensive?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304982</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>Korin43</author>
	<datestamp>1259583780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You could run a pretty fast quicksort.. Not sure if that counts as 'useful'<nobr> <wbr></nobr>;)</htmltext>
<tokentext>You could run a pretty fast quicksort.. Not sure if that counts as 'useful ' ; )</tokentext>
<sentencetext>You could run a pretty fast quicksort.. Not sure if that counts as 'useful' ;)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303444</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044</id>
	<title>Advantages over just adding more FPUs?</title>
	<author>Anonymous</author>
	<datestamp>1259577300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Can someone elaborate on why you'd want 48 full processors, rather than a processor with two (dual) or four (quad) "cores" (I'm presuming core in this case == FPU in the article). Supposedly Win7's SMP support becomes much more effective at the 12-16 core threshold.</p></htmltext>
<tokentext>Can someone elaborate on why you 'd want 48 full processors , rather than a processor with two ( dual ) or four ( quad ) " cores " ( I 'm presuming core in this case = = FPU in the article ) .
Supposedly Win7 's SMP support becomes much more effective at the 12-16 core threshold .</tokentext>
<sentencetext>Can someone elaborate on why you'd want 48 full processors, rather than a processor with two (dual) or four (quad) "cores" (I'm presuming core in this case == FPU in the article).
Supposedly Win7's SMP support becomes much more effective at the 12-16 core threshold.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305540</id>
	<title>And yet it's still...</title>
	<author>Hurricane78</author>
	<datestamp>1259586300000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>...the crappy x86 architecture.</p><p>Oh what people with no spine can achieve... Like no change at all because of fear of not being loved anymore. Or like adding Clippy to your Office suite for the same reason. Or imitating MS Office with your Office suite just to be loved. Or imitating the main OS, for the same reason.</p><p>Instead of having the balls to stand behind what you think for a decade: &ldquo;Oh boy is that a piece of outdated shit, I wish we could replace it by something that actually fits the decade!&rdquo;</p><p>P.S.: No, mentioning a <em>bad</em> architecture, like Itanium, is not going to put a dent into that argument. ^^ Just like acting as if switching to a good architecture and making a clear cut would be mutually exclusive, which they are not. Also even a great concept can fall, if those who should support it, have no spine, and cave in to the retards and uninformed, despite knowing that it&rsquo;s a great concept.</p></htmltext>
<tokentext>...the crappy x86 architecture .
Oh what people with no spine can achieve... Like no change at all because of fear of not being loved anymore .
Or like adding Clippy to your Office suite for the same reason .
Or imitating MS Office with your Office suite just to be loved .
Or imitating the main OS , for the same reason .
Instead of having the balls to stand behind what you think for a decade : " Oh boy is that a piece of outdated shit , I wish we could replace it by something that actually fits the decade ! "
P.S. : No , mentioning a bad architecture , like Itanium , is not going to put a dent into that argument .
^^ Just like acting as if switching to a good architecture and making a clear cut would be mutually exclusive , which they are not .
Also even a great concept can fall , if those who should support it , have no spine , and cave in to the retards and uninformed , despite knowing that it 's a great concept .</tokentext>
<sentencetext>...the crappy x86 architecture.
Oh what people with no spine can achieve... Like no change at all because of fear of not being loved anymore.
Or like adding Clippy to your Office suite for the same reason.
Or imitating MS Office with your Office suite just to be loved.
Or imitating the main OS, for the same reason.
Instead of having the balls to stand behind what you think for a decade: “Oh boy is that a piece of outdated shit, I wish we could replace it by something that actually fits the decade!”
P.S.: No, mentioning a bad architecture, like Itanium, is not going to put a dent into that argument.
^^ Just like acting as if switching to a good architecture and making a clear cut would be mutually exclusive, which they are not.
Also even a great concept can fall, if those who should support it, have no spine, and cave in to the retards and uninformed, despite knowing that it’s a great concept.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304752</id>
	<title>Stupid questions</title>
	<author>rumblin'rabbit</author>
	<datestamp>1259582880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>How many gigaflops will this sucker do? How is parallel programming done? Is it standard multi-threading, or something else? What's the expected cost of these babies?

Bottom line me here.</htmltext>
<tokentext>How many gigaflops will this sucker do ?
How is parallel programming done ?
Is it standard multi-threading , or something else ?
What 's the expected cost of these babies ?
Bottom line me here .</tokentext>
<sentencetext>How many gigaflops will this sucker do?
How is parallel programming done?
Is it standard multi-threading, or something else?
What's the expected cost of these babies?
Bottom line me here.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307460</id>
	<title>Re:Idle benchmarks</title>
	<author>Anonymous</author>
	<datestamp>1259602020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>And why not 100 so you can have a true 99% idle time?  Tilera's got you covered.  http://www.internetnews.com/hardware/article.php/3845421</p></htmltext>
<tokentext>And why not 100 so you can have a true 99 % idle time ?
Tilera 's got you covered .
http://www.internetnews.com/hardware/article.php/3845421</tokentext>
<sentencetext>And why not 100 so you can have a true 99% idle time?
Tilera's got you covered.
http://www.internetnews.com/hardware/article.php/3845421</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303154</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306268</id>
	<title>Re:Code Name is Offensive</title>
	<author>Anonymous</author>
	<datestamp>1259590920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Man, if that offends you, you must be offended quite often.   Do you really think its that important that you should be offended?

Do you think things are so fragile that a mere product code name is important?</htmltext>
<tokentext>Man , if that offends you , you must be offended quite often .
Do you really think its that important that you should be offended ?
Do you think things are so fragile that a mere product code name is important ?</tokentext>
<sentencetext>Man, if that offends you, you must be offended quite often.
Do you really think its that important that you should be offended?
Do you think things are so fragile that a mere product code name is important?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30309262</id>
	<title>Re:Code Name is Offensive</title>
	<author>CharlyFoxtrot</author>
	<datestamp>1259847660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>In the hope of staving off further controversy the codename has now been changed to Pussygalore.</p></htmltext>
<tokentext>In the hope of staving off further controversy the codename has now been changed to Pussygalore .</tokentext>
<sentencetext>In the hope of staving off further controversy the codename has now been changed to Pussygalore.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306552</id>
	<title>Re:Meh. I'm holding out for a kilocore.</title>
	<author>jimmydevice</author>
	<datestamp>1259593200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>When the designers figure out that simple Multi-core architectures with lots of external/internal bandwidth and fixed partitioning of processes can we finally get on with the move from the cpu centric / one processor mindset.</htmltext>
<tokentext>When the designers figure out that simple Multi-core architectures with lots of external/internal bandwidth and fixed partitioning of processes can we finally get on with the move from the cpu centric / one processor mindset .</tokentext>
<sentencetext>When the designers figure out that simple Multi-core architectures with lots of external/internal bandwidth and fixed partitioning of processes can we finally get on with the move from the cpu centric / one processor mindset.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305572</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303700</id>
	<title>Re:Yet another cloud?</title>
	<author>hazydave</author>
	<datestamp>1259579280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>They're Intel... they have this buzzword department, and those kiddies have to make a living, too. Remember the Intel Pentium 4 "Netburst" architecture. Nothing whatsoever to do with nets, networking, the internet, etc.... other than the fact Intel Marketroids were trying to convince all the Mundanes (Muggles, to you kiddies) that this CPU would magically make their internet go faster. Yup, that's it.. not the fact you're on a frickin' POTS modem.</p></htmltext>
<tokentext>They 're Intel... they have this buzzword department , and those kiddies have to make a living , too .
Remember the Intel Pentium 4 " Netburst " architecture .
Nothing whatsoever to do with nets , networking , the internet , etc.... other than the fact Intel Marketroids were trying to convince all the Mundanes ( Muggles , to you kiddies ) that this CPU would magically make their internet go faster .
Yup , that 's it.. not the fact you 're on a frickin ' POTS modem .</tokentext>
<sentencetext>They're Intel... they have this buzzword department, and those kiddies have to make a living, too.
Remember the Intel Pentium 4 "Netburst" architecture.
Nothing whatsoever to do with nets, networking, the internet, etc.... other than the fact Intel Marketroids were trying to convince all the Mundanes (Muggles, to you kiddies) that this CPU would magically make their internet go faster.
Yup, that's it.. not the fact you're on a frickin' POTS modem.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303068</id>
	<title>Re:Code Name is Offensive</title>
	<author>_merlin</author>
	<datestamp>1259577360000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>0</modscore>
	<htmltext><p>Insightful WTF?  If you get offended that easily, you'd better:</p><ol><li>Not come out from your basement, lest you see something being worth upset over</li><li>Go running to mummy so she can make it better</li><li>Or grow up</li></ol></htmltext>
<tokentext>Insightful WTF ?
If you get offended that easily , you 'd better :
Not come out from your basement , lest you see something being worth upset over
Go running to mummy so she can make it better
Or grow up</tokentext>
<sentencetext>Insightful WTF?
If you get offended that easily, you'd better:
Not come out from your basement, lest you see something being worth upset over
Go running to mummy so she can make it better
Or grow up</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303316</id>
	<title>Re:Yet another cloud?</title>
	<author>Lord Ender</author>
	<datestamp>1259578020000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>The term "cloud" is over-used, but a 48-core chip is certainly a good match for anyone who uses virtualization, and cloud-style data services are absolutely big users of virtualization.</p><p>Cloud computing is certainly a big deal. I recently explained to my boss that instead of spending weeks going through tickets, bureaucracy, approvals, and procurement to get a server in our own datacenter, we could go to Amazon, type the credit card number, and be up-and-running with a few clicks!</p><p>I don't know if he understood exactly what cloud computing *is*, but he knows it is important and will have a major impact on IT. So when someone mentions the word "cloud" he listens. Marketers are aware of this sort of thing, so they deliberately use these terms as liberally as possible.</p></htmltext>
<tokentext>The term " cloud " is over-used , but a 48-core chip is certainly a good match for anyone who uses virtualization , and cloud-style data services are absolutely big users of virtualization .
Cloud computing is certainly a big deal .
I recently explained to my boss that instead of spending weeks going through tickets , bureaucracy , approvals , and procurement to get a server in our own datacenter , we could go to Amazon , type the credit card number , and be up-and-running with a few clicks !
I do n't know if he understood exactly what cloud computing * is * , but he knows it is important and will have a major impact on IT .
So when someone mentions the word " cloud " he listens .
Marketers are aware of this sort of thing , so they deliberately use these terms as liberally as possible .</tokentext>
<sentencetext>The term "cloud" is over-used, but a 48-core chip is certainly a good match for anyone who uses virtualization, and cloud-style data services are absolutely big users of virtualization.
Cloud computing is certainly a big deal.
I recently explained to my boss that instead of spending weeks going through tickets, bureaucracy, approvals, and procurement to get a server in our own datacenter, we could go to Amazon, type the credit card number, and be up-and-running with a few clicks!
I don't know if he understood exactly what cloud computing *is*, but he knows it is important and will have a major impact on IT.
So when someone mentions the word "cloud" he listens.
Marketers are aware of this sort of thing, so they deliberately use these terms as liberally as possible.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308396</id>
	<title>Now with 10 blades!</title>
	<author>Anonymous</author>
	<datestamp>1259832840000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Hey, if it works for Gillette, I say go for it</p></htmltext>
<tokentext>Hey , if it works for Gillette , I say go for it</tokentext>
<sentencetext>Hey, if it works for Gillette, I say go for it</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303418</id>
	<title>New GPU for Crysis 2??</title>
	<author>capnhowdy24</author>
	<datestamp>1259578320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Maybe if Nvidia partners with Intel and develops a new GPU out of this, it will handle Crysis 2!</htmltext>
<tokentext>Maybe if Nvidia partners with Intel and develops a new GPU out of this , it will handle Crysis 2 !</tokentext>
<sentencetext>Maybe if Nvidia partners with Intel and develops a new GPU out of this, it will handle Crysis 2!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305106</id>
	<title>Re:Is there enugh cpu to chipset bandwith to make</title>
	<author>Angst Badger</author>
	<datestamp>1259584440000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><div class="quote"><p>Is there enough cpu to chipset bandwidth to make use of all this cpu power?</p></div><p>That's really going to depend on the intended use. And on whether the intended use involves problems that a) can be efficiently parallelized, and more importantly, b) actually have been efficiently parallelized. But unless each core gets its own memory bus and its own dedicated memory with its own cache, I rather expect that the only things that are going to be parallelized to their maximum potential are wait states. All that said, it will still probably run faster than a two- or four-core CPU for many tasks, but it won't be running 48 times faster. I would not, however, refuse a manufacturer's sample if one was handed to me.<nobr> <wbr></nobr>;)</p><p>On the positive side, if this beast actually makes it to market, it might help spur the development of new parallel software.</p>
	</htmltext>
<tokentext>Is there enough cpu to chipset bandwidth to make use of all this cpu power ?
That 's really going to depend on the intended use .
And on whether the intended use involves problems that a ) can be efficiently parallelized , and more importantly , b ) actually have been efficiently parallelized .
But unless each core gets its own memory bus and its own dedicated memory with its own cache , I rather expect that the only things that are going to be parallelized to their maximum potential are wait states .
All that said , it will still probably run faster than a two- or four-core CPU for many tasks , but it wo n't be running 48 times faster .
I would not , however , refuse a manufacturer 's sample if one was handed to me .
; ) On the positive side , if this beast actually makes it to market , it might help spur the development of new parallel software .</tokentext>
<sentencetext>Is there enough cpu to chipset bandwidth to make use of all this cpu power?That's really going to depend on the intended use.
And on whether the intended use involves problems that a) can be efficiently parallelized, and more importantly, b) actually have been efficiently parallelized.
But unless each core gets its own memory bus and its own dedicated memory with its own cache, I rather expect that the only things that are going to be parallelized to their maximum potential are wait states.
All that said, it will still probably run faster than a two- or four-core CPU for many tasks, but it won't be running 48 times faster.
I would not, however, refuse a manufacturer's sample if one was handed to me.
;)On the positive side, if this beast actually makes it to market, it might help spur the development of new parallel software.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303440</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308068</id>
	<title>Re:Yet another cloud?</title>
	<author>FreakyGreenLeaky</author>
	<datestamp>1259610540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It's like this "ICT" crap.  IT wasn't hip enough anymore, so some (marketing, probably) dummy somewhere decided to splice in Communication.  It's like they feel the need to give existing technology a fresh coat of paint every few years.</p></htmltext>
<tokentext>It 's like this " ICT " crap .
IT was n't hip enough anymore , so some ( marketing , probably ) dummy somewhere decided to splice in Communication .
It 's like they feel the need to give existing technology a fresh coat of paint every few years .</tokentext>
<sentencetext>It's like this "ICT" crap.
IT wasn't hip enough anymore, so some (marketing, probably) dummy somewhere decided to splice in Communication.
It's like they feel the need to give existing technology a fresh coat of paint every few years.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30309592</id>
	<title>Re:Windows 12</title>
	<author>Anonymous</author>
	<datestamp>1259851980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>That would be a funny name for Windows, at least for Portuguese speaking people.<br>Hint: 12 = doze.</p></htmltext>
<tokentext>That would be a funny name for Windows , at least for Portuguese speaking people.Hint : 12 = doze .</tokentext>
<sentencetext>That would be a funny name for Windows, at least for Portuguese speaking people.Hint: 12 = doze.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302962</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303326</id>
	<title>Re:Idle benchmarks</title>
	<author>olsmeister</author>
	<datestamp>1259578080000</datestamp>
	<modclass>None</modclass>
	<modscore>2</modscore>
	<htmltext>So would this have saved that guy's ass who spent $1M in electricity running SETI@Home on the school's computers?</htmltext>
<tokentext>So would this have saved that guy 's ass who spent $ 1M in electricity running SETI @ Home on the school 's computers ?</tokentext>
<sentencetext>So would this have saved that guy's ass who spent $1M in electricity running SETI@Home on the school's computers?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303154</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306498</id>
	<title>INMOS Transputer was right after all...</title>
	<author>Anonymous</author>
	<datestamp>1259592780000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Imitation *is* the sincerest form of flattery.</p></htmltext>
<tokentext>Imitation * is * the sincerest form of flattery .</tokentext>
<sentencetext>Imitation *is* the sincerest form of flattery.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305810</id>
	<title>Re:Idle benchmarks</title>
	<author>ascari</author>
	<datestamp>1259587800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Not if you have one of them Dell laptops!</htmltext>
<tokentext>Not if you have one of them Dell laptops !</tokentext>
<sentencetext>Not if you have one of them Dell laptops!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303154</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303248</id>
	<title>Re:Code Name is Offensive</title>
	<author>jcnnghm</author>
	<datestamp>1259577840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>How awful of them to use the name of San Francisco's sister city, the "Silicon Valley" of India, as a product codename.  Were you equally offended when Ibex Peak, Tylersburg, Alviso, Calistoga, Lakeport, Broadwater, Eaglelake, Crestline and Cantiga were used as codenames?</p><p>
&nbsp; You don't need to get your panties in a twist over this.  Although it is worth mentioning that it makes you look like a racist when you assume that an innocuous naming decision is some form of racial bigotry or social commentary.</p></htmltext>
<tokentext>How awful of them to use the name of San Francisco 's sister city , the " Silicon Valley " of India , as a product codename .
Were you equally offended when Ibex Peak , Tylersburg , Alviso , Calistoga , Lakeport , Broadwater , Eaglelake , Crestline and Cantiga were used as codenames ?
  You do n't need to get your panties in a twist over this .
Although it is worth mentioning that it makes you look like a racist when you assume that an innocuous naming decision is some form of racial bigotry or social commentary .</tokentext>
<sentencetext>How awful of them to use the name of San Francisco's sister city, the "Silicon Valley" of India, as a product codename.
Were you equally offended when Ibex Peak, Tylersburg, Alviso, Calistoga, Lakeport, Broadwater, Eaglelake, Crestline and Cantiga were used as codenames?
  You don't need to get your panties in a twist over this.
Although it is worth mentioning that it makes you look like a racist when you assume that an innocuous naming decision is some form of racial bigotry or social commentary.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30346290</id>
	<title>Re:And yet it's still...</title>
	<author>Hurricane78</author>
	<datestamp>1260097320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>-1, Troll? Yeah right.</p><p>And yet another &ldquo;I don&rsquo;t get the depth of what you said in the slightest, and don&rsquo;t even want to. Because I have no experience in life and never thought about anything deeply. But this goes against my primitive beliefs that I cling to, to give my weak life meaning. So I disagree and would like to censor&rdquo;.</p><p>Weak, people... Weak!</p></htmltext>
<tokentext>-1 , Troll ?
Yeah right.And yet another " I do n't get the depth of what you said in the slightest , and do n't even want to .
Because I have no experience in life and never thought about anything deeply .
But this goes against my primitive beliefs that I cling to , to give my weak life meaning .
So I disagree and would like to censor " .Weak , people... Weak !</tokentext>
<sentencetext>-1, Troll?
Yeah right.And yet another “I don’t get the depth of what you said in the slightest, and don’t even want to.
Because I have no experience in life and never thought about anything deeply.
But this goes against my primitive beliefs that I cling to, to give my weak life meaning.
So I disagree and would like to censor”.Weak, people... Weak!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305540</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307288</id>
	<title>Re:Code Name is Offensive</title>
	<author>Anonymous</author>
	<datestamp>1259599860000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext><p>Firstly stop being xenophobic.<br>Maybe the name is Bangalore because of this?<br><a href="http://nextbigfuture.com/2009/12/intel-makes-single-chip-cloud-computer.html" title="nextbigfuture.com" rel="nofollow">http://nextbigfuture.com/2009/12/intel-makes-single-chip-cloud-computer.html</a> [nextbigfuture.com]<br>"This represents the latest achievement from Intel's Tera-scale Computing Research Program. The research was co-led by Intel Labs Bangalore, India, Intel Labs Braunschweig, Germany and Intel Labs researchers in the United States. "<br>And Intel is an international company headquartered in the US. Intel gets just 20% of its revenue from the Americas<br><a href="http://www.forbes.com/feeds/businesswire/2009/10/13/businesswire130140595.html" title="forbes.com" rel="nofollow">http://www.forbes.com/feeds/businesswire/2009/10/13/businesswire130140595.html</a> [forbes.com]</p><p>And Bangalore has nothing to do with the current or the previous US recession. India imports more from the US than it exports to the US. Hence the US has a trade surplus with India. The current crisis was caused by reckless behavior by American financial institutions and the American housing bubble and it has affected the rest of the world.</p><p>Stop being so driven by hatred and country sentiments. We all live in this same world, are humans, dependent on each other and deserve respect from all other human beings. Hatred is so 2008.... grow up.</p></htmltext>
<tokentext>Firstly stop being xenophobic.Maybe the name is Bangalore because of this ? http : //nextbigfuture.com/2009/12/intel-makes-single-chip-cloud-computer.html [ nextbigfuture.com ] " This represents the latest achievement from Intel 's Tera-scale Computing Research Program .
The research was co-led by Intel Labs Bangalore , India , Intel Labs Braunschweig , Germany and Intel Labs researchers in the United States .
" And Intel is an international company headquartered in the US .
Intel gets just 20 % of its revenue from the Americashttp : //www.forbes.com/feeds/businesswire/2009/10/13/businesswire130140595.html [ forbes.com ] And Bangalore has nothing to do with the current or the previous US recession .
India imports more from the US than it exports to the US .
Hence the US has a trade surplus with India .
The current crisis was caused by reckless behavior by American financial institutions and the American housing bubble and it has affected the rest of the world.Stop being so driven by hatred and country sentiments .
We all live in this same world , are humans , dependent on each other and deserve respect from all other human beings .
Hatred is so 2008.... grow up .</tokentext>
<sentencetext>Firstly stop being xenophobic.Maybe the name is Bangalore because of this?http://nextbigfuture.com/2009/12/intel-makes-single-chip-cloud-computer.html [nextbigfuture.com]"This represents the latest achievement from Intel's Tera-scale Computing Research Program.
The research was co-led by Intel Labs Bangalore, India, Intel Labs Braunschweig, Germany and Intel Labs researchers in the United States.
"And Intel is an international company headquartered in the US.
Intel gets just 20% of its revenue from the Americashttp://www.forbes.com/feeds/businesswire/2009/10/13/businesswire130140595.html [forbes.com]And Bangalore has nothing to do with the current or the previous US recession.
India imports more from the US than it exports to the US.
Hence the US has a trade surplus with India.
The current crisis was caused by reckless behavior by American financial institutions and the American housing bubble and it has affected the rest of the world.Stop being so driven by hatred and country sentiments.
We all live in this same world, are humans, dependent on each other and deserve respect from all other human beings.
Hatred is so 2008.... grow up.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30324354</id>
	<title>Re:Yet another cloud?</title>
	<author>QuantumRiff</author>
	<datestamp>1259945460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Because Cloud Computing signifies the new Paradigm of collaboration in the digital age.  It is opening up new avenues of cohesive synergies.</p></htmltext>
<tokentext>Because Cloud Computing signifies the new Paradigm of collaboration in the digital age .
It is opening up new avenues of cohesive synergies .</tokentext>
<sentencetext>Because Cloud Computing signifies the new Paradigm of collaboration in the digital age.
It is opening up new avenues of cohesive synergies.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304046</id>
	<title>Re:Meh. I'm holding out for a kilocore.</title>
	<author>Anonymous</author>
	<datestamp>1259580360000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Sounds like a Transformer.</p><p>Optimus Prime: What's for dinner?</p><p>Mega Core: Electricity</p></htmltext>
<tokentext>Sounds like a Transformer.Optimus Prime : What 's for dinner ? Mega Core : Electricity</tokentext>
<sentencetext>Sounds like a Transformer.Optimus Prime: What's for dinner?Mega Core: Electricity</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302910</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302974</id>
	<title>'Single-chip Cloud Computer'</title>
	<author>Anonymous</author>
	<datestamp>1259577120000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>What the heck? Just, what the heck?</p></htmltext>
<tokentext>What the heck ?
Just , what the heck ?</tokentext>
<sentencetext>What the heck?
Just, what the heck?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305090</id>
	<title>Re:Synergy!</title>
	<author>atheistmonk</author>
	<datestamp>1259584380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Bingo! inb4whoosh</htmltext>
<tokentext>Bingo !
inb4whoosh</tokentext>
<sentencetext>Bingo!
inb4whoosh</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303356</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308704</id>
	<title>Re:Codenames</title>
	<author>cheekyboy</author>
	<datestamp>1259837820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>They could have chosen some names from this list.<br><a href="http://www.i-r-genius.com/rudeplaces.html" title="i-r-genius.com" rel="nofollow">http://www.i-r-genius.com/rudeplaces.html</a> [i-r-genius.com]</p><p>Lord Berkeley's Knob</p></htmltext>
<tokentext>They could have chosen some names from this list.http : //www.i-r-genius.com/rudeplaces.html [ i-r-genius.com ] Lord Berkeley 's Knob</tokentext>
<sentencetext>They could have chosen some names from this list.http://www.i-r-genius.com/rudeplaces.html [i-r-genius.com]Lord Berkeley's Knob</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303920</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303752</id>
	<title>Re:Code Name is Offensive</title>
	<author>Tetsujin</author>
	<datestamp>1259579460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><div class="quote"><p>Intel an American company, with the American economy in the shape it's in, I am offended at the codename Bangalore.</p></div><p>As the last remaining operational Soong type android, I am offended by the name Bang-A-Lore.</p></div><p>So you're B4, then?</p><p>Well, I guess it was several years ago that you were known as B4...  What's the name you're using these days...  "Pryor", isn't it?</p>
	</htmltext>
<tokentext>Intel an American company , with the American economy in the shape it 's in , I am offended at the codename Bangalore.As the last remaining operational Soong type android , I am offended by the name Bang-A-Lore.So you 're B4 , then ? Well , I guess it was several years ago that you were known as B4... What 's the name you 're using these days... " Pryor " , is n't it ?</tokentext>
<sentencetext>Intel an American company, with the American economy in the shape it's in, I am offended at the codename Bangalore.As the last remaining operational Soong type android, I am offended by the name Bang-A-Lore.So you're B4, then?Well, I guess it was several years ago that you were known as B4...  What's the name you're using these days...  "Pryor", isn't it?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303040</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308694</id>
	<title>Re:Synergy!</title>
	<author>Anonymous</author>
	<datestamp>1259837640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>it's sad that that almost makes sense,  i think you must have a PHB... or work for General Electric</p></htmltext>
<tokentext>it 's sad that that almost makes sense , i think you must have a PHB... or work for General Electric</tokentext>
<sentencetext>it's sad that that almost makes sense,  i think you must have a PHB... or work for General Electric</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303356</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303446</id>
	<title>Re:Code Name is Offensive</title>
	<author>Anonymous</author>
	<datestamp>1259578380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>One could argue a Puma is essentially the same thing as a Cougar/Mountain Lion...  Not much in the way of big cats native to the United States.</p></htmltext>
<tokentext>One could argue a Puma is essentially the same thing as a Cougar/Mountain Lion... Not much in the way of big cats native to the United States .</tokentext>
<sentencetext>One could argue a Puma is essentially the same thing as a Cougar/Mountain Lion...  Not much in the way of big cats native to the United States.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303212</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302910</id>
	<title>Meh.  I'm holding out for a kilocore.</title>
	<author>Anonymous</author>
	<datestamp>1259576880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>...or perhaps a megacore?</p></htmltext>
<tokentext>...or perhaps a megacore ?</tokentext>
<sentencetext>...or perhaps a megacore?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304820</id>
	<title>Re:Meh. I'm holding out for a kilocore.</title>
	<author>norminator</author>
	<datestamp>1259583180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>64 cores ought to be enough for anyone.</htmltext>
<tokentext>64 cores ought to be enough for anyone .</tokentext>
<sentencetext>64 cores ought to be enough for anyone.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302910</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303138</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>h4rr4r</author>
	<datestamp>1259577540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>For a server.<br>Probably not running Windows, as Linux and other *nix type OSes support monstrous numbers of CPUs already.</p></htmltext>
<tokentext>For a server.Probably not running Windows , as Linux and other * nix type OSes support monstrous numbers of CPUs already .</tokentext>
<sentencetext>For a server.Probably not running Windows, as Linux and other *nix type OSes support monstrous numbers of CPUs already.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303202</id>
	<title>Re:Codenames</title>
	<author>somersault</author>
	<datestamp>1259577720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Just imagine.. a Beowulf cluster^2!</p></htmltext>
<tokentext>Just imagine.. a Beowulf cluster ^ 2 !</tokentext>
<sentencetext>Just imagine.. a Beowulf cluster^2!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302918</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304672</id>
	<title>It could just be...</title>
	<author>jonaskoelker</author>
	<datestamp>1259582640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Well, it could just be that Intel wants to sell a really, <b>really</b> big e-peen to business decision makers ;-)</p><p>(You know, the ones with short, pointy hair)</p></htmltext>
<tokentext>Well , it could just be that Intel wants to sell a really , really big e-peen to business decision makers ; - ) ( You know , the ones with short , pointy hair )</tokentext>
<sentencetext>Well, it could just be that Intel wants to sell a really, really big e-peen to business decision makers ;-)(You know, the ones with short, pointy hair)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303440</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306254</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>maraist</author>
	<datestamp>1259590860000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext>What is worse is that they've done away with cache coherence. So I don't think you can take a 48 thread mysql / java process and just scale it. You COULD use forked processes that don't share much. (i.e. postgres/apache/php).</htmltext>
<tokentext>What is worse is that they 've done away with cache coherence .
So I do n't think you can take a 48 thread mysql / java process and just scale it .
You COULD use forked processes that do n't share much .
( ie postgres/apache/php ) .</tokentext>
<sentencetext>What is worse is that they've done away with cache coherence.
So I don't think you can take a 48 thread mysql / java process and just scale it.
You COULD use forked processes that don't share much.
(ie postgres/apache/php).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303444</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306908</id>
	<title>Re:And yet it's still...</title>
	<author>afidel</author>
	<datestamp>1259596380000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>x86-64 is actually a pretty good architecture with a decent tradeoff between registers and instruction compactness. Since the instructions are compact you can fit more of them per RAM clock cycle, which is an advantage vs. a pure RISC architecture; this is why POWER has come much more towards the CISC side of things than x86 has gone towards RISC (externally; internally it's pretty much a RISC machine).</htmltext>
<tokentext>x86-64 is actually a pretty good architecture with a decent tradeoff between registers and instruction compactness .
Since the instructions are compact you can fit more of them per RAM clock cycle which is an advantage vs a pure RISC architecture which is why POWER has come much more towards the CISC side of things than x86 has gone towards RISC ( externally , internally it 's pretty much a RISC machine ) .</tokentext>
<sentencetext>x86-64 is actually a pretty good architecture with a decent tradeoff between registers and instruction compactness.
Since the instructions are compact you can fit more of them per RAM clock cycle which is an advantage vs a pure RISC architecture which is why POWER has come much more towards the CISC side of things than x86 has gone towards RISC (externally, internally it's pretty much a RISC machine).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305540</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303020</id>
	<title>Larrabee?</title>
	<author>Anonymous</author>
	<datestamp>1259577240000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>This seems like it would be very related to their Larrabee GPU project.</p></htmltext>
<tokentext>This seems like it would be very related to their Larrabee GPU project .</tokentext>
<sentencetext>This seems like it would be very related to their Larrabee GPU project.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303040</id>
	<title>Re:Code Name is Offensive</title>
	<author>Anonymous</author>
	<datestamp>1259577240000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><div class="quote"><p>Intel an American company, with the American economy in the shape it's in, I am offended at the codename Bangalore.</p></div><p>As the last remaining operational Soong type android, I am offended by the name Bang-A-Lore.</p>
	</htmltext>
<tokentext>Intel an American company , with the American economy in the shape it 's in , I am offended at the codename Bangalore.As the last remaining operational Soong type android , I am offended by the name Bang-A-Lore .</tokentext>
<sentencetext>Intel an American company, with the American economy in the shape it's in, I am offended at the codename Bangalore.As the last remaining operational Soong type android, I am offended by the name Bang-A-Lore.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308882</id>
	<title>Single chip cloud computer?</title>
	<author>Anonymous</author>
	<datestamp>1259841180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I'm waiting for Cloud 2.0</p></htmltext>
<tokentext>I 'm waiting for Cloud 2.0</tokentext>
<sentencetext>I'm waiting for Cloud 2.0</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306092</id>
	<title>Re:Code Name is Offensive</title>
	<author>hlge</author>
	<datestamp>1259589540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>
Bangalore &gt; lots of cheap labor for simple tasks (Simple x86 core) -&gt;  code name makes perfect sense</htmltext>
<tokentext>Bangalore &gt; lots of cheap labor for simple tasks ( Simple x86 core ) - &gt; code name makes perfect sense</tokentext>
<sentencetext>
Bangalore &gt; lots of cheap labor for simple tasks (Simple x86 core) -&gt;  code name makes perfect sense</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303920</id>
	<title>Re:Codenames</title>
	<author>azrael29a</author>
	<datestamp>1259580000000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><div class="quote"><p>Why can companies not come up with decent code names.  For instance, this would be the perfect case for it being codenamed "Beowulf".</p></div><p>They're using geographical names (cities, places, lakes, rivers) to avoid having to register the codename as a trademark. Geographical names can't be trademarked, so no one will use your codename for his trademark.</p>
	</htmltext>
<tokentext>Why can companies not come up with decent code names .
For instance , this would be the perfect case for it being codenamed " Beowulf " .They 're using geographical names ( cities , places , lakes , rivers ) to avoid having to register the codename as a trademark .
Geographical names ca n't be trademarked so no one will use your codename for his trademark .</tokentext>
<sentencetext>Why can companies not come up with decent code names.
For instance, this would be the perfect case for it being codenamed "Beowulf".They're using geographical names (cities, places, lakes, rivers) to avoid having to register the codename as a trademark.
Geographical names can't be trademarked so no one will use your codename for his trademark.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302918</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303336</id>
	<title>Re:Yet another cloud?</title>
	<author>MobileTatsu-NJG</author>
	<datestamp>1259578080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Why is everything called cloud these days? Yet another du jour buzzword. Is this really justified here?</p></div><p>Given that making effective use of these cores would call for engineering code to work with any number of cores, as opposed to just 2, 4, or 8, then yes it is semi-justified, especially if aimed at the server market.  I do say 'semi', though, because I partially agree with you about its silliness.</p>
	</htmltext>
<tokentext>Why is everything called cloud these days ?
Yet another du jour buzzword .
Is this really justified here ? Given that making effective use of these cores would call for engineering code to work with any number of cores , as opposed to just 2 , 4 , or 8 , then yes it is semi-justified , especially if aimed at the server market .
I do say 'semi ' , though , because I partially agree with you about its silliness .</tokentext>
<sentencetext>Why is everything called cloud these days?
Yet another du jour buzzword.
Is this really justified here?
Given that making effective use of these cores would call for engineering code to work with any number of cores, as opposed to just 2, 4, or 8, then yes it is semi-justified, especially if aimed at the server market.
I do say 'semi', though, because I partially agree with you about its silliness.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303552</id>
	<title>Re:Codenames</title>
	<author>WheelDweller</author>
	<datestamp>1259578800000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Well, there are a lot of computers in Bangalore... India. Most of them seem to be starving, but they compute. :&gt;</p></htmltext>
<tokenext>Well , there are a lot of computers in Bangalore...India .
Most of them seem to be starving , but they compute .
: &gt;</tokentext>
<sentencetext>Well, there are a lot of computers in Bangalore...India.
Most of them seem to be starving, but they compute.
:&gt;</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302918</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306834</id>
	<title>This sounds REMARKABLY like IBM POWER</title>
	<author>zevans</author>
	<datestamp>1259595720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>...doesn't it? Multiple cores, strongly interconnected? What have Intel done that is new here?</p></htmltext>
<tokenext>...does n't it ?
Multiple cores , strongly interconnected ?
What have Intel done that is new here ?</tokentext>
<sentencetext>...doesn't it?
Multiple cores, strongly interconnected?
What have Intel done that is new here?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303582</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>eabrek</author>
	<datestamp>1259578860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Each core has an FPU, so # of cores == # of FPU.</p><p>Adding the rest of the core effectively gives you # of instruction pointers == # of FPU.  So now, you can run more branchy code (like raytracing and physics simulations).</p></htmltext>
<tokenext>Each core has an FPU , so # of cores == # of FPU .
Adding the rest of the core effectively gives you # of instruction pointers == # of FPU .
So now , you can run more branchy code ( like raytracing and physics simulations ) .</tokentext>
<sentencetext>Each core has an FPU, so # of cores == # of FPU.
Adding the rest of the core effectively gives you # of instruction pointers == # of FPU.
So now, you can run more branchy code (like raytracing and physics simulations).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303514</id>
	<title>That's nothing, how about 64 cores for $435?</title>
	<author>Anonymous</author>
	<datestamp>1259578680000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext><p>Here's the Wired story.</p><p>http://www.wired.com/gadgetlab/2007/08/64-core-chips-a/</p></htmltext>
<tokenext>Here 's the Wired story . http://www.wired.com/gadgetlab/2007/08/64-core-chips-a/</tokentext>
<sentencetext>Here's the Wired story.
http://www.wired.com/gadgetlab/2007/08/64-core-chips-a/</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305522</id>
	<title>Re:Idle benchmarks</title>
	<author>mario_grgic</author>
	<datestamp>1259586180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Obviously you don't write software for a living. Software always expands to eat up any advances in hardware users might otherwise gain from upgrading to latest technology.</p></htmltext>
<tokenext>Obviously you do n't write software for a living .
Software always expands to eat up any advances in hardware users might otherwise gain from upgrading to latest technology .</tokentext>
<sentencetext>Obviously you don't write software for a living.
Software always expands to eat up any advances in hardware users might otherwise gain from upgrading to latest technology.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303154</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303002</id>
	<title>2D mesh networks are not technically viable</title>
	<author>Anonymous</author>
	<datestamp>1259577180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>They may work fine in towers but drop right out of side mounted desktops and small media units.</p></htmltext>
<tokenext>They may work fine in towers but drop right out of side mounted desktops and small media units .</tokentext>
<sentencetext>They may work fine in towers but drop right out of side mounted desktops and small media units.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305556</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>Anonymous</author>
	<datestamp>1259586360000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>There is no cache coherency.  Each pair of cores is effectively a separate computer.  So Grand Central Dispatch won't do you much good.</p><p>Give that processor some more memory bandwidth and it could be a reasonable GPU.</p></htmltext>
<tokenext>There is no cache coherency .
Each pair of cores is effectively a separate computer .
So Grand Central Dispatch wo n't do you much good .
Give that processor some more memory bandwidth and it could be a reasonable GPU .</tokentext>
<sentencetext>There is no cache coherency.
Each pair of cores is effectively a separate computer.
So Grand Central Dispatch won't do you much good.
Give that processor some more memory bandwidth and it could be a reasonable GPU.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303528</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307736</id>
	<title>Re:'Single-chip Cloud Computer'</title>
	<author>amirulbahr</author>
	<datestamp>1259605740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Agreed.  Since when did "cloud" mean multi-core computer or CPU?</htmltext>
<tokenext>Agreed .
Since when did " cloud " mean multi-core computer or CPU ?</tokentext>
<sentencetext>Agreed.
Since when did "cloud" mean multi-core computer or CPU?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302974</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308288</id>
	<title>This sounds a lot like the Transputer</title>
	<author>crispytwo</author>
	<datestamp>1259830860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>In the late '80s a networked computer chip for multiprocessing was created: <a href="http://en.wikipedia.org/wiki/Transputer" title="wikipedia.org" rel="nofollow">http://en.wikipedia.org/wiki/Transputer</a> [wikipedia.org]</p><p>It was pretty awesome and used C with some libraries or Occam2 as the programming language. You could link up as many of these babies as you would like and they would communicate between themselves for your parallel programs.</p><p>It's nice to see something similar in scale coming into the mainstream more/again.</p></htmltext>
<tokenext>In the late '80s a networked computer chip for multiprocessing was created http : //en.wikipedia.org/wiki/Transputer [ wikipedia.org ] It was pretty awesome and used C with some libraries or Occam2 as the programming language .
You could link up as many of these babies as you would like and they would communicate between themselves for your parallel programs.It 's nice to see something similar in scale coming into the main stream more/again .</tokentext>
<sentencetext>In the late '80s a networked computer chip for multiprocessing was created: http://en.wikipedia.org/wiki/Transputer [wikipedia.org].
It was pretty awesome and used C with some libraries or Occam2 as the programming language.
You could link up as many of these babies as you would like and they would communicate between themselves for your parallel programs.
It's nice to see something similar in scale coming into the mainstream more/again.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304258</id>
	<title>Re:Yet another cloud?</title>
	<author>nschubach</author>
	<datestamp>1259581140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Just wait for Cloud 2.0!</p></htmltext>
<tokenext>Just wait for Cloud 2.0 !</tokentext>
<sentencetext>Just wait for Cloud 2.0!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303840</id>
	<title>Re:Is there enough cpu to chipset bandwidth to make</title>
	<author>Kjella</author>
	<datestamp>1259579760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Well, the current solutions don't seem bandwidth starved, looking at the dual-channel vs triple-channel Nehalems. With a setup like that you could probably do multiple memory controllers and NUMA too, if you needed to, so I imagine there'll be enough.</p></htmltext>
<tokenext>Well , the current solutions do n't seem bandwidth starved , looking at the dual-channel vs triple-channel Nehalems .
With a setup like that you could probably do multiple memory controllers and NUMA too , if you needed so I imagine there 'll be enough .</tokentext>
<sentencetext>Well, the current solutions don't seem bandwidth starved, looking at the dual-channel vs triple-channel Nehalems.
With a setup like that you could probably do multiple memory controllers and NUMA too, if you needed to, so I imagine there'll be enough.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303440</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30312332</id>
	<title>Re:Yet another cloud?</title>
	<author>bullfrawg</author>
	<datestamp>1259863380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Over-internet clouds (like Google's) constitute a threat to Intel's business model.  Cloud computing says you don't need to buy your own powerful CPU; someone will buy just enough CPUs to support the average load, and you'll rent them when you need them.  And you can have your computing finished sooner because of parallelism in the cloud.
<p>
Intel seems to be intentionally muddying the terminology to defend their business model.  "Here's a cloud on your desk" says you can have parallelism while owning (and buying) your own CPU.
</p><p>
As an aside, Intel is right and Google is wrong, IMHO.  Most of the money in a CPU is in the development, not in the hardware.  And chip manufacturing benefits greatly from scale.  If Google makes it so that people don't buy their own CPUs, it will only save money in the short run.  The more people rely on Google's Clouds, the more Google will have to pay for each CPU.  Meanwhile Intel can sell cheap versions of what Google needs to the consumer.  Cut out networking overhead, get a workstation . . . I mean a Cloud on your desk!
</p><p>
We've seen this pendulum swing before, and it always comes back.  Those who want computing have to pay for the development of computers somehow.  In a world in which the incremental cost is low to own your personal copy of Intel's (or AMD's or whoever's) Intellectual Property in silicon, the workstation will always win out.
</p><p>
Until silicon is replaced with something much more expensive.</p></htmltext>
<tokenext>Over-internet clouds ( like Google 's ) constitute a threat to Intel 's business model .
Cloud computing says you do n't need to buy your own powerful CPU ; someone will buy just enough CPUs to support the average load , and you 'll rent them when you need them .
And you can have your computing finished sooner because of parallelism in the cloud .
Intel seems to be intentionally muddying the terminology to defend their business model .
" Here 's a cloud on your desk " says you can have parallelism while owning ( and buying ) your own CPU .
As an aside , Intel is right and Google is wrong , IMHO .
Most of the money in a CPU is in the development , not in the hardware .
And chip manufacturing benefits greatly from scale .
If Google makes it so that people do n't buy their own CPUs , it will only save money in the short run .
The more people rely on Google 's Clouds , the more Google will have to pay for each CPU .
Meanwhile Intel can sell cheap versions of what Google needs to the consumer .
Cut out networking overhead , get a workstation ... I mean a Cloud on your desk !
We 've seen this pendulum swing before , and it always comes back .
Those who want computing have to pay for the development of computers somehow .
In a world in which the incremental cost is low to own your personal copy of Intel 's ( or AMD 's or whoever 's ) Intellectual Property in silicon , the workstation will always win out .
Until silicon is replaced with something much more expensive .</tokentext>
<sentencetext>Over-internet clouds (like Google's) constitute a threat to Intel's business model.
Cloud computing says you don't need to buy your own powerful CPU; someone will buy just enough CPUs to support the average load, and you'll rent them when you need them.
And you can have your computing finished sooner because of parallelism in the cloud.
Intel seems to be intentionally muddying the terminology to defend their business model.
"Here's a cloud on your desk" says you can have parallelism while owning (and buying) your own CPU.
As an aside, Intel is right and Google is wrong, IMHO.
Most of the money in a CPU is in the development, not in the hardware.
And chip manufacturing benefits greatly from scale.
If Google makes it so that people don't buy their own CPUs, it will only save money in the short run.
The more people rely on Google's Clouds, the more Google will have to pay for each CPU.
Meanwhile Intel can sell cheap versions of what Google needs to the consumer.
Cut out networking overhead, get a workstation... I mean a Cloud on your desk!
We've seen this pendulum swing before, and it always comes back.
Those who want computing have to pay for the development of computers somehow.
In a world in which the incremental cost is low to own your personal copy of Intel's (or AMD's or whoever's) Intellectual Property in silicon, the workstation will always win out.
Until silicon is replaced with something much more expensive.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303520</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>Anonymous</author>
	<datestamp>1259578680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yes. YES! Raytracing! And emulating a D3Dn card in software (Google: pixomatic) and running the latest game with acceptable framerates.</p></htmltext>
<tokenext>Yes .
YES ! Raytracing !
And emulating a D3Dn card in software ( Google : pixomatic ) and running the latest game with acceptable framerates .</tokentext>
<sentencetext>Yes.
YES! Raytracing!
And emulating a D3Dn card in software (Google: pixomatic) and running the latest game with acceptable framerates.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305486</id>
	<title>NUMA vs SMP</title>
	<author>mario_grgic</author>
	<datestamp>1259586120000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>In my experience Windows 7 64-bit is noticeably faster with a NUMA configuration (the Windows Experience Index is significantly higher because of improved memory throughput) and the majority of applications also run up to 10% faster.</p><p>I don't know if this is because of Nehalem Xeon CPUs having faster access to CPU-local memory in a NUMA configuration or if Windows is also optimized for this?</p></htmltext>
<tokenext>In my experience Windows 7 64-bit is noticeably faster with a NUMA configuration ( the Windows Experience Index is significantly higher because of improved memory throughput ) and the majority of applications also run up to 10 % faster .
I do n't know if this is because of Nehalem Xeon CPUs having faster access to CPU-local memory in a NUMA configuration or if Windows is also optimized for this ?</tokentext>
<sentencetext>In my experience Windows 7 64-bit is noticeably faster with a NUMA configuration (the Windows Experience Index is significantly higher because of improved memory throughput) and the majority of applications also run up to 10% faster.
I don't know if this is because of Nehalem Xeon CPUs having faster access to CPU-local memory in a NUMA configuration or if Windows is also optimized for this?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303212</id>
	<title>Re:Code Name is Offensive</title>
	<author>Monkeedude1212</author>
	<datestamp>1259577780000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>Does the fact that none* of the Apple operating system names are of animals not native to America?</p><p>*After 5.1, which is "Kodiak" - which can be found in Alaska.<br>5.2 Mac OS X v10.0 "Cheetah"<br>5.3 Mac OS X v10.1 "Puma"<br>5.4 Mac OS X v10.2 "Jaguar"<br>5.5 Mac OS X v10.3 "Panther"<br>5.6 Mac OS X v10.4 "Tiger"<br>5.7 Mac OS X v10.5 "Leopard"<br>5.8 Mac OS X v10.6 "Snow Leopard"</p></htmltext>
<tokenext>Does the fact that none * of the Apple Operating system names are of animals not native to America ?
* After 5.1 , which is " Kodiak " - which can be found in Alaska . 5.2 Mac OS X v10.0 " Cheetah " 5.3 Mac OS X v10.1 " Puma " 5.4 Mac OS X v10.2 " Jaguar " 5.5 Mac OS X v10.3 " Panther " 5.6 Mac OS X v10.4 " Tiger " 5.7 Mac OS X v10.5 " Leopard " 5.8 Mac OS X v10.6 " Snow Leopard "</tokentext>
<sentencetext>Does the fact that none* of the Apple Operating system names are of animals not native to America?
*After 5.1, which is "Kodiak" - which can be found in Alaska. 5.2 Mac OS X v10.0 "Cheetah" 5.3 Mac OS X v10.1 "Puma" 5.4 Mac OS X v10.2 "Jaguar" 5.5 Mac OS X v10.3 "Panther" 5.6 Mac OS X v10.4 "Tiger" 5.7 Mac OS X v10.5 "Leopard" 5.8 Mac OS X v10.6 "Snow Leopard"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305332</id>
	<title>Obligatory</title>
	<author>mhajicek</author>
	<datestamp>1259585400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>640 cores ought to be enough for anybody!</htmltext>
<tokenext>640 cores ought to be enough for anybody !</tokentext>
<sentencetext>640 cores ought to be enough for anybody!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303222</id>
	<title>Re:Code Name is Offensive</title>
	<author>Anonymous</author>
	<datestamp>1259577780000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>With a code name like "Bangalore", I would expect it to be a RISC based CPU that's very slow, requires instructions to be repeated and confirmed, and late to service any interrupts.</p></htmltext>
<tokenext>With a code name like " Bangalore " , I would expect it to be a RISC based CPU that 's very slow , requires instructions to be repeated and confirmed , and late to service any interrupts .</tokentext>
<sentencetext>With a code name like "Bangalore", I would expect it to be a RISC based CPU that's very slow, requires instructions to be repeated and confirmed, and late to service any interrupts.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307080</id>
	<title>Re:48 is sufficient for most Ph.D. dissertations.</title>
	<author>Anonymous</author>
	<datestamp>1259598120000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext><p>Word gets pretty slow when you hit a hundred pages with figures on a Core Duo, but you could always just use LaTeX or a file per chapter.  I managed to get my dissertation done with just two cores and my parents managed with a typewriter (although those were masters, not PhDs).</p></htmltext>
<tokenext>Word gets pretty slow when you hit a hundred pages with figures on a Core Duo , but you could always just use LaTeX or a file per chapter .
I managed to get my dissertation done with just two cores and my parents managed with a typewriter ( although those were masters , not PhDs ) .</tokentext>
<sentencetext>Word gets pretty slow when you hit a hundred pages with figures on a Core Duo, but you could always just use LaTeX or a file per chapter.
I managed to get my dissertation done with just two cores and my parents managed with a typewriter (although those were masters, not PhDs).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303436</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303154</id>
	<title>Idle benchmarks</title>
	<author>Anonymous</author>
	<datestamp>1259577600000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>With 48 processors you can have your system 98% idle running your typical application at full speed rather than just 50% or 75% idle as is the norm now.</p></htmltext>
<tokenext>With 48 processors you can have your system 98 % idle running your typical application at full speed rather than just 50 % or 75 % idle as is the norm now .</tokentext>
<sentencetext>With 48 processors you can have your system 98% idle running your typical application at full speed rather than just 50% or 75% idle as is the norm now.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305572</id>
	<title>Re:Meh. I'm holding out for a kilocore.</title>
	<author>Anonymous</author>
	<datestamp>1259586480000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext>I think it's more likely we'll see kibicores and mebicores.</htmltext>
<tokenext>I think it 's more likely we 'll see kibicores and mebicores .</tokentext>
<sentencetext>I think it's more likely we'll see kibicores and mebicores.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302910</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305082</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>Jeremy Erwin</author>
	<datestamp>1259584320000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Embarrassingly parallel is right. Cache coherency was sacrificed in order to up the number of cores, though I suppose a Beowulf on a chip is still useful for some things.</p></htmltext>
<tokenext>Embarrassingly parallel is right .
Cache coherency was sacrificed in order to up the number of cores , though I suppose a Beowulf on a chip is still useful for some things .</tokentext>
<sentencetext>Embarrassingly parallel is right.
Cache coherency was sacrificed in order to up the number of cores, though I suppose a Beowulf on a chip is still useful for some things.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304148</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306582</id>
	<title>Re:Yet another cloud?</title>
	<author>ground.zero.612</author>
	<datestamp>1259593500000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>Why is everything called cloud these days? Yet another du jour buzzword. Is this really justified here?</p></div><p>Cloudgate?</p>
	</htmltext>
<tokenext>Why is everything called cloud these days ?
Yet another du jour buzzword .
Is this really justified here ?
Cloudgate ?</tokentext>
<sentencetext>Why is everything called cloud these days?
Yet another du jour buzzword.
Is this really justified here?
Cloudgate?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304148</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>vertinox</author>
	<datestamp>1259580840000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p><i>Can someone elaborate on why you'd want 48 full processors, rather than a processor with two (dual) or four (quad) "cores" (I'm presuming core in this case == FPU in the article). Supposedly Win7's SMP support becomes much more effective at the 12-16 core threshold.</i></p><p>The first thought that comes to mind is video processing and CGI animation, because those applications are <a href="http://en.wikipedia.org/wiki/Embarrassingly_parallel" title="wikipedia.org">embarrassingly parallel</a> [wikipedia.org].</p><p>And those companies usually have the money to spend on top of the line hardware.</p><p>Eventually this will trickle down to the consumer level as always, and people at home will be able to do real-time, movie-quality CGI on their home computers in 10 years.</p></htmltext>
<tokenext>Can someone elaborate on why you 'd want 48 full processors , rather than a processor with two ( dual ) or four ( quad ) " cores " ( I 'm presuming core in this case = = FPU in the article ) .
Supposedly Win7 's SMP support becomes much more effective at the 12-16 core thresehold.The first thought comes to mind if video processing and CGI animations because those applications are embarrassingly parallel [ wikipedia.org ] .And those companies usually have the money to spend on top of the line hardware.Eventually this will trickle down to consumer level as always and people at home can now do real time movie quality CGI on their home computers in 10 years .</tokentext>
<sentencetext>Can someone elaborate on why you'd want 48 full processors, rather than a processor with two (dual) or four (quad) "cores" (I'm presuming core in this case == FPU in the article).
Supposedly Win7's SMP support becomes much more effective at the 12-16 core threshold.
The first thought that comes to mind is video processing and CGI animation, because those applications are embarrassingly parallel [wikipedia.org].
And those companies usually have the money to spend on top of the line hardware.
Eventually this will trickle down to the consumer level as always, and people at home will be able to do real-time, movie-quality CGI on their home computers in 10 years.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30312668</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>jd2112</author>
	<datestamp>1259864460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Simple: With 48 processors you can run the full Symantec utilities suite and still have a somewhat usable system, at least until the 2011 version is released anyway...</htmltext>
<tokenext>Simple : With 48 processors you can run the full Symantec utilities suite and still have a somewhat usable system , at least until the 2011 version is released anyway ...</tokentext>
<sentencetext>Simple: With 48 processors you can run the full Symantec utilities suite and still have a somewhat usable system, at least until the 2011 version is released anyway...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304304</id>
	<title>Buzzzzzzzzword!</title>
	<author>Anonymous</author>
	<datestamp>1259581320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Single-chip Cloud Computer</p></div><p>Wow. That actually caused physical pain in my frontal lobe. Way to live the corporate buzzword stereotype, Intel.</p>
	</htmltext>
<tokenext>Single-chip Cloud Computer
Wow .
That actually caused physical pain in my frontal lobe .
Way to live the corporate buzzword stereotype , Intel .</tokentext>
<sentencetext>Single-chip Cloud Computer
Wow.
That actually caused physical pain in my frontal lobe.
Way to live the corporate buzzword stereotype, Intel.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303650</id>
	<title>Anonymous Dunster</title>
	<author>Anonymous</author>
	<datestamp>1259579160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Maybe Bill Gates will say, "48 cores is more cores than anyone will ever need." Cor blimey.</p></htmltext>
<tokenext>Maybe Bill Gates will say , " 48 cores is more cores than anyone will ever need . "
Cor blimey .</tokentext>
<sentencetext>Maybe Bill Gates will say, "48 cores is more cores than anyone will ever need."
Cor blimey.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308342</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>Anonymous</author>
	<datestamp>1259831880000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Or TBB: http://software.intel.com/en-us/intel-tbb/</p><p>http://en.wikipedia.org/wiki/Intel_Threading_Building_Blocks</p></htmltext>
<tokenext>Or TBB : http://software.intel.com/en-us/intel-tbb/ http://en.wikipedia.org/wiki/Intel_Threading_Building_Blocks</tokentext>
<sentencetext>Or TBB : http://software.intel.com/en-us/intel-tbb/http://en.wikipedia.org/wiki/Intel\_Threading\_Building\_Blocks</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303528</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307338</id>
	<title>Re:Code Name is Offensive</title>
	<author>Anonymous</author>
	<datestamp>1259600220000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Pumas and Jaguars are native to America.</p><p>Maybe not native to the gringo part of it, but there are plenty of them here at the south of the Equator.</p></htmltext>
<tokenext>Pumas and Jaguars are native to America.Maybe not native to the gringo part of it , but there are plenty of them here at the south of the Equator .</tokentext>
<sentencetext>Pumas and Jaguars are native to America.Maybe not native to the gringo part of it, but there are plenty of them here at the south of the Equator.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303212</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303856</id>
	<title>Re:'Single-chip Cloud Computer'</title>
	<author>Tetsujin</author>
	<datestamp>1259579760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>What the heck? Just, what the heck?</p></div><p>Yeah, a better code-name would have been "Lakitu"...</p><p>Damn cloud, always following me around throwing things at me...</p></p>
	</htmltext>
<tokenext>What the heck ?
Just , what the heck ? Yeah , a better code-name would have been " Lakitu " ...Damn cloud , always following me around throwing things at me.. .</tokentext>
<sentencetext>What the heck?
Just, what the heck?Yeah, a better code-name would have been "Lakitu"...Damn cloud, always following me around throwing things at me...
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302974</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308942</id>
	<title>Re:Synergy!</title>
	<author>Anonymous</author>
	<datestamp>1259842080000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I'll buy some!</p></htmltext>
<tokenext>I 'll buy some !</tokentext>
<sentencetext>I'll buy some!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303356</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303688</id>
	<title>48 what cores ?</title>
	<author>psergiu</author>
	<datestamp>1259579220000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>48 what cores ?</p><p>Will a chip with 48x 486 CPUs be of any use today ?</p><p>How much L2 cache in each core ? 64Kb ?</p></htmltext>
<tokenext>48 what cores ? Will a chip with 48x 486 CPUs be of any use today ? How much L2 cache in each core ?
64Kb ?</tokentext>
<sentencetext>48 what cores ?Will a chip with 48x 486 CPUs be of any use today ?How much L2 cache in each core ?
64Kb ?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305702</id>
	<title>Beurhg</title>
	<author>Karem Lore</author>
	<datestamp>1259587260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Wake me up when the processor is $100</p></htmltext>
<tokenext>Wake me up when the processor is $ 100</tokentext>
<sentencetext>Wake me up when the processor is $100</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304650</id>
	<title>Don't forget.</title>
	<author>jonaskoelker</author>
	<datestamp>1259582520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Don't forget to leverage turn-key best-of-breed uhh... consumer-focused... enterprise social matrix uh... what are we selling again?</p></htmltext>
<tokenext>Do n't forget to leverage turn-key best-of-breed uhh... consumer-focused... enterprise social matrix uh... what are we selling again ?</tokentext>
<sentencetext>Don't forget to leverage turn-key best-of-breed uhh... consumer-focused... enterprise social matrix uh... what are we selling again?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303356</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30319252</id>
	<title>Re:Synergy!</title>
	<author>Anonymous</author>
	<datestamp>1259847420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Cool!  Is your company having an IPO anytime soon?  This sounds like a sweet thing to invest in!</p></htmltext>
<tokenext>Cool !
Is your company having an IPO anytime soon ?
This sounds like a sweet thing to invest in !</tokentext>
<sentencetext>Cool!
Is your company having an IPO anytime soon?
This sounds like a sweet thing to invest in!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303356</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304846</id>
	<title>Awesome</title>
	<author>Anonymous</author>
	<datestamp>1259583300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I'd be more impressed by a 6GHz quad-core though.</p></htmltext>
<tokenext>I 'd be more impressed by a 6GHz quad-core though .</tokentext>
<sentencetext>I'd be more impressed by a 6GHz quad-core though.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304006</id>
	<title>lazy</title>
	<author>trb</author>
	<datestamp>1259580240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><ol>
<li> fabricate x86 silicon wafer</li>
<li> don't bother slicing it up into separate chips</li>
<li> profit</li>
</ol></htmltext>
<tokenext>fabricate x86 silicon wafer do n't bother slicing it up into separate chips profit</tokentext>
<sentencetext>
 fabricate x86 silicon wafer
 don't bother slicing it up into separate chips
 profit
</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303284</id>
	<title>Re:Codenames</title>
	<author>revlayle</author>
	<datestamp>1259577900000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext>Could you imagine a Beowulf cluster of Beowul.... *head explodes*</htmltext>
<tokenext>Could you imagine a Beowulf cluster of Beowul.... * head explodes *</tokentext>
<sentencetext>Could you imagine a Beowulf cluster of Beowul.... *head explodes*</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302918</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307958</id>
	<title>Re:Code Name is Offensive</title>
	<author>peater</author>
	<datestamp>1259608980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Heh, yeah there was this secretary at my workplace in India who had a doc file called "Bangme.doc" which one of us noticed while shoulder surfing. Out of curiosity we opened the document while she wasn't around only to find out that it was a schedule of the partner's Bangalore Meetings. Big big let down.</htmltext>
<tokenext>Heh , yeah there was this secretary at my workplace in India who had a doc file called " Bangme.doc " which one of us noticed while shoulder surfing .
Out of curiosity we opened the document while she was n't around only to find out that it was a schedule of the partner 's Bangalore Meetings .
Big big let down .</tokentext>
<sentencetext>Heh, yeah there was this secretary at my workplace in India who had a doc file called "Bangme.doc" which one of us noticed while shoulder surfing.
Out of curiosity we opened the document while she wasn't around only to find out that it was a schedule of the partner's Bangalore Meetings.
Big big let down.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303040</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306242</id>
	<title>Re:48 is sufficient for most Ph.D. dissertations.</title>
	<author>kharchenko</author>
	<datestamp>1259590740000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><i>&gt;If UCSB had such a system back in the 1990s, then UCSB would likely have produced as much multiprocessor research as Stanford University</i> <br>
Actually, UCSB had exactly such a system in the 90's, called Meiko:
<a href="http://www.cs.ucsb.edu/~tyang/class/110B/meiko/" title="ucsb.edu">"The Department of Computer Science at UCSB purchased a 64-processor CS-2 in June 1994."</a> [ucsb.edu]</htmltext>
<tokenext>&gt; If UCSB had such a system back in the 1990s , then UCSB would likely have produced as much multiprocessor research as Stanford University Actually , UCSB had exactly such a system in the 90 's , called Meiko : " The Department of Computer Science at UCSB purchased a 64-processor CS-2 in June 1994 .
" [ ucsb.edu ]</tokentext>
<sentencetext>&gt;If UCSB had such a system back in the 1990s, then UCSB would likely have produced as much multiprocessor research as Stanford University 
Actually, UCSB had exactly such a system in the 90's, called Meiko:
"The Department of Computer Science at UCSB purchased a 64-processor CS-2 in June 1994.
" [ucsb.edu]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303436</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30312458</id>
	<title>x86?</title>
	<author>gravis777</author>
	<datestamp>1259863740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Guess I will wait for AMD to make an x64 version of it. I don't care how many cores you have, I am not going back to 32 bit! Especially if I have 48 cores, I REALLY want to use more than 3 gig of RAM!</p></htmltext>
<tokenext>Guess I will wait for AMD to make an x64 version of it .
I do n't care how many cores you have , I am not going back to 32 bit !
Especially if I have 48 cores , I REALLY want to use more than 3 gig of RAM !</tokentext>
<sentencetext>Guess I will wait for AMD to make an x64 version of it.
I don't care how many cores you have, I am not going back to 32 bit!
Especially if I have 48 cores, I REALLY want to use more than 3 gig of RAM!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304254</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>Anonymous</author>
	<datestamp>1259581140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Multi-user systems?  If 1 user can max out a dual core processor now, what would happen if you put 48 people on that same system?</htmltext>
<tokenext>Multi-user systems ?
If 1 user can max out a dual core processor now , what would happen if you put 48 people on that same system ?</tokentext>
<sentencetext>Multi-user systems?
If 1 user can max out a dual core processor now, what would happen if you put 48 people on that same system?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305178</id>
	<title>Completely new processor design</title>
	<author>nurb432</author>
	<datestamp>1259584740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That uses existing IA86 core technology..</p><p>Marketing guys are smoking too much 'cloud' I think.</p></htmltext>
<tokenext>That uses existing IA86 core technology..Marketing guys are smoking too much 'cloud ' I think .</tokentext>
<sentencetext>That uses existing IA86 core technology..Marketing guys are smoking too much 'cloud' I think.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303712</id>
	<title>Sounds like Sinclair's wafer scale integration.</title>
	<author>LWATCDR</author>
	<datestamp>1259579280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It does sound a lot like it. Truth is that it is probably a lot more like the old Pentium D packages but still kind of interesting.<br>So how many Cortex A8 cores could you fit on one of these?</p></htmltext>
<tokenext>It does sound a lot like it .
Truth is that it is probably a lot more like the old Pentium D packages but still kind of interesting.So how many Cortex A8 cores could you fit on one of these ?</tokentext>
<sentencetext>It does sound a lot like it.
Truth is that it is probably a lot more like the old Pentium D packages but still kind of interesting.So how many Cortex A8 cores could you fit on one of these?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30332544</id>
	<title>Re:Synergy!</title>
	<author>Anonymous</author>
	<datestamp>1259947200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I hope no one is playing the /. buzzword drinking game tonight.</p></htmltext>
<tokenext>I hope no one is playing the / .
buzzword drinking game tonight .</tokentext>
<sentencetext>I hope no one is playing the /.
buzzword drinking game tonight.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303356</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306574</id>
	<title>Re:Advantages over just adding more FPUs?</title>
	<author>afidel</author>
	<datestamp>1259593440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>2008 is certified up to 256 cores because that was the biggest box available during development; the kernel can easily support more.</htmltext>
<tokenext>2008 is certified up to 256 cores because that was the biggest box available during development ; the kernel can easily support more .</tokentext>
<sentencetext>2008 is certified up to 256 cores because that was the biggest box available during development; the kernel can easily support more.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303138</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308824</id>
	<title>Re:Not the same thing</title>
	<author>Anonymous</author>
	<datestamp>1259840040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"Like say you have a 3D rendering engine and it has 4 rendering threads. If all those threads got assigned to one core, well it would run little faster than a single thread running on that core"</p><p>How is it possible for a multithreaded render to be faster than a single threaded render ON THE SAME CORE?</p></htmltext>
<tokenext>" Like say you have a 3D rendering engine and it has 4 rendering threads .
If all those threads got assigned to one core , well it would run little faster than a single thread running on that core " How is it possible for a multithreaded render to be faster than a single threaded render ON THE SAME CORE ?</tokentext>
<sentencetext>"Like say you have a 3D rendering engine and it has 4 rendering threads.
If all those threads got assigned to one core, well it would run little faster than a single thread running on that core"How is it possible for a multithreaded render to be faster than a single threaded render ON THE SAME CORE?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304678</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303540</id>
	<title>Re:Codenames</title>
	<author>Knara</author>
	<datestamp>1259578740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Intel has always had less than catchy code names, IMHO.</htmltext>
<tokenext>Intel has always had less than catchy code names , IMHO .</tokentext>
<sentencetext>Intel has always had less than catchy code names, IMHO.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302918</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_50</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303336
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303816
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303474
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304756
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303356
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30332544
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303474
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304678
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308824
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305540
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306776
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304618
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_74</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303040
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303752
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_65</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303444
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305074
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_70</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306582
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303248
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304258
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303356
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308694
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_55</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303440
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305106
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303444
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306254
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307056
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_62</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305486
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_79</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303632
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303582
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_47</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303160
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_49</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302910
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303436
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306242
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308914
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_80</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305554
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_63</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30324354
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_54</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302918
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303284
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305540
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306908
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303440
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30332784
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303212
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307610
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_53</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306570
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302910
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304046
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306092
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303528
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308342
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_78</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303440
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304672
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_60</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303138
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305950
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304148
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305082
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_81</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303212
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307338
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303040
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304670
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_69</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303356
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304650
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302974
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303856
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_73</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303068
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303440
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303840
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303212
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307560
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_59</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302918
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303202
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_52</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303154
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305522
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302910
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305572
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306552
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_66</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303528
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305556
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303154
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305810
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302910
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307804
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303154
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303326
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302918
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303920
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308704
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303024
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_72</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303222
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_67</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303444
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306254
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30313110
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_58</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30312332
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_71</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303728
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303356
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305090
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_57</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303520
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_48</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303138
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306574
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303444
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304982
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302918
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303540
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_64</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304254
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305540
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30346290
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303212
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303446
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30309262
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303212
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308646
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303154
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307460
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_77</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308068
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302910
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303436
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306190
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_56</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302910
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303436
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307080
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_61</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302962
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30309592
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_46</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303356
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308942
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302974
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307736
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_51</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302910
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304820
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304100
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_76</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302910
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306238
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306268
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302918
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303552
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30312668
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_75</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303700
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303356
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30319252
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303316
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307288
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303010
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_02_215207_68</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303040
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307958
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_02_215207.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302974
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307736
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303856
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_02_215207.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303514
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_02_215207.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302928
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303024
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303160
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303010
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303212
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303446
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307610
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307560
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307338
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308646
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303040
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303752
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304670
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307958
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307288
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30309262
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306268
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303222
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303248
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303068
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306092
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_02_215207.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304304
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_02_215207.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302962
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30309592
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_02_215207.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303062
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303728
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303700
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304618
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308068
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30324354
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303336
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304258
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304100
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303316
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30312332
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306582
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303632
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306570
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_02_215207.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303020
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_02_215207.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304752
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_02_215207.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302918
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303552
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303540
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303920
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308704
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303202
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303284
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_02_215207.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303418
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_02_215207.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303356
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308942
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308694
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30319252
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305090
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30332544
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304650
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_02_215207.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303044
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305486
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303520
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304148
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305082
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304254
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303154
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305522
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307460
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305810
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303326
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303816
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303444
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306254
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307056
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30313110
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304982
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305074
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303582
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303528
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305556
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308342
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30312668
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303138
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305950
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306574
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305554
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_02_215207.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305540
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306776
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306908
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30346290
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_02_215207.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303712
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_02_215207.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307524
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_02_215207.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30302910
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307804
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305572
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306552
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304046
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303436
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306190
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306242
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308914
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30307080
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304820
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30306238
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_02_215207.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303650
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_02_215207.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303474
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304756
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304678
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30308824
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_02_215207.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303440
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30303840
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30332784
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30305106
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_02_215207.30304672
</commentlist>
</conversation>
