<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_10_26_0711218</id>
	<title>Tilera To Release 100-Core Processor</title>
	<author>timothy</author>
	<datestamp>1256545620000</datestamp>
	<htmltext><a href="http://www.goodgearguide.com.au/" rel="nofollow">angry tapir</a> writes <i>"Tilera has announced new general-purpose CPUs, <a href="http://www.goodgearguide.com.au/article/323692">including a 100-core chip</a>. The two-year-old startup's Tile-GX series of chips are targeted at servers and appliances that execute Web-related functions such as indexing, Web search and video search. The Gx100 100-core chip will draw close to 55 watts of power at maximum performance."</i></htmltext>
<tokentext>angry tapir writes " Tilera has announced new general-purpose CPUs , including a 100-core chip .
The two-year-old startup 's Tile-GX series of chips are targeted at servers and appliances that execute Web-related functions such as indexing , Web search and video search .
The Gx100 100-core chip will draw close to 55 watts of power at maximum performance .
"</tokentext>
<sentencetext>angry tapir writes "Tilera has announced new general-purpose CPUs, including a 100-core chip.
The two-year-old startup's Tile-GX series of chips are targeted at servers and appliances that execute Web-related functions such as indexing, Web search and video search.
The Gx100 100-core chip will draw close to 55 watts of power at maximum performance.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870641</id>
	<title>What ISA?</title>
	<author>abdulla</author>
	<datestamp>1256557800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Are these x86/x86-64 CPUs? It wasn't particularly clear to me.</htmltext>
<tokentext>Are these x86/x86-64 CPUs ?
It was n't particularly clear to me .</tokentext>
<sentencetext>Are these x86/x86-64 CPUs?
It wasn't particularly clear to me.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870325</id>
	<title>Re:Custom ISA?</title>
	<author>V!NCENT</author>
	<datestamp>1256552880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>GPU's are <b> <i>not</i> </b> massively multicored! That's marketing speak...</p></htmltext>
<tokentext>GPU 's are not massively multicored !
That 's marketing speak.. .</tokentext>
<sentencetext>GPU's are  not  massively multicored!
That's marketing speak...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870111</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870973</id>
	<title>hmm...</title>
	<author>Skizmo</author>
	<datestamp>1256562600000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext>100 cores... that means that my cpu will never go beyond '1\% busy'</htmltext>
<tokentext>100 cores... that means that my cpu will never go beyond '1 \ % busy'</tokentext>
<sentencetext>100 cores... that means that my cpu will never go beyond '1\% busy'</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871669</id>
	<title>15-bladed shaving razor</title>
	<author>kannibul</author>
	<datestamp>1256568120000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
<htmltext>For some reason, I read this article and immediately thought about a 15-bladed shaving razor...

My point being that 100 cores, while it sounds impressive, you get a diminished return after a few cores. Even if software was written for multi-core use (and not enough of it is, IMO), you still can't possibly, effectively, use 100 cores...not before this processor is already extinct due to technological progress.

Even my quad core Intel CPU, hardly uses all 4 cores...and most commonly hits CPU1 for processes.</htmltext>
<tokentext>For some reason , I read this article and immediately thought about a 15-bladed shaving razor.. . My point being that 100 cores , while it sounds impressive , you get a diminished return after a few cores .
Even if software was written for multi-core use ( and not enough of it is , IMO ) , you still ca n't possibly , effectively , use 100 cores...not before this processor is already extinct due to technological progress .
Even my quad core Intel CPU , hardly uses all 4 cores...and most commonly hits CPU1 for processes .</tokentext>
<sentencetext>For some reason, I read this article and immediately thought about a 15-bladed shaving razor...

My point being that 100 cores, while it sounds impressive, you get a diminished return after a few cores.
Even if software was written for multi-core use (and not enough of it is, IMO), you still can't possibly, effectively, use 100 cores...not before this processor is already extinct due to technological progress.
Even my quad core Intel CPU, hardly uses all 4 cores...and most commonly hits CPU1 for processes.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870445</id>
	<title>Allow ia64 to CONFIG\_NR\_CPUS up to 4096</title>
	<author>foobsr</author>
	<datestamp>1256554260000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><a href="http://www.kernel.org/pub/linux/kernel/v2.6/snapshots/patch-2.6.27-rc2-git1.log" title="kernel.org">http://www.kernel.org/pub/linux/kernel/v2.6/snapshots/patch-2.6.27-rc2-git1.log</a> [kernel.org]
<br> <br>
CC.</htmltext>
<tokentext>http : //www.kernel.org/pub/linux/kernel/v2.6/snapshots/patch-2.6.27-rc2-git1.log [ kernel.org ] CC .</tokentext>
<sentencetext>http://www.kernel.org/pub/linux/kernel/v2.6/snapshots/patch-2.6.27-rc2-git1.log [kernel.org]
 
CC.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871863</id>
	<title>News fodder</title>
	<author>InsaneProcessor</author>
	<datestamp>1256569140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>This looks like another one of those companies that announces they "will" have a part that does "something" nobody else does and that it "will" be available someday.  When a two year old start up company makes an announcement like this, it usually means they are just looking for some fast capitalization to rip someone off.  There recently was another start up that was going after Intel's business.<br> <br>Then there was  Transmeta Corporation.</htmltext>
<tokentext>This looks like another one of those companies that announces they " will " have a part that does " something " nobody else does and that it " will " be available someday .
When a two year old start up company makes an announcement like this , it usually means they are just looking for some fast capitalization to rip someone off .
There recently was another start up that was going after Intel 's business .
Then there was Transmeta Corporation .</tokentext>
<sentencetext>This looks like another one of those companies that announces they "will" have a part that does "something" nobody else does and that it "will" be available someday.
When a two year old start up company makes an announcement like this, it usually means they are just looking for some fast capitalization to rip someone off.
There recently was another start up that was going after Intel's business.
Then there was  Transmeta Corporation.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870939</id>
	<title>Power of two is not at all necessary</title>
	<author>Sycraft-fu</author>
	<datestamp>1256562240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>It is done only out of convenience really. So you have your regular 1 core processor of course (2^0), next step up is a second core (2^1). Now from there, an easy step is to simply duplicate your dual core setup. You just make a second copy and put it on the same chip giving you 4 cores (2^2). This is as far as most chips go, more than 4 cores is not real common. However you might notice we have a real small sample set, we've only covered 3 powers of two, two of them by necessity. This trend thus isn't one because computers require it, just because it works out that way.</p><p>So, if you sniff around, you discover that indeed AMD makes 3 core processors. They are called the Phenom X3. Basically what happens is they designed a quad core chip. however they are having yield problems. Often enough, one of the cores fails testing, but the others work. So what they do is disable that core, and sell a 3 core product. End result works great, the OS sees 3 CPUs and uses them.</p><p>OSes don't care about specifics in terms of core numbers. Power of two core numbers are just the way it has worked out in many chips so far because we aren't dealing with large numbers. It is going to quickly go away though. Intel is going to introduce a 6-core chip next year. We are heading towards a market that will have processors with a number of cores that is convenient. What "convenient" is will depend on a lot of factors, but the divisibility of the numbers won't be one of them.</p><p>We may well start to see more odd numbered CPUs. If you design something with 100 individual units, it is much easier to disable parts if they don't work. Might see 96, 97, 98, 99, and 100 core varieties or something like that. All the same chip, just with units disabled if they fail.</p><p>GPUs have been doing this for years. 
They are highly parallel and often when a new high end part comes out there'll be a slightly lower end part that is a bit lower clock and with one or two of the pipelines disabled. This allows for parts that won't pass all the tests, but still mostly work, to be sold rather than thrown out.</p></htmltext>
<tokentext>It is done only out of convenience really .
So you have your regular 1 core processor of course ( 2 ^ 0 ) , next step up is a second core ( 2 ^ 1 ) .
Now from there , an easy step is to simply duplicate your dual core setup .
You just make a second copy and put it on the same chip giving you 4 cores ( 2 ^ 2 ) .
This is as far as most chips go , more than 4 cores is not real common .
However you might notice we have a real small sample set , we 've only covered 3 powers of two , two of them by necessity .
This trend thus is n't one because computers require it , just because it works out that way.So , if you sniff around , you discover that indeed AMD makes 3 core processors .
They are called the Phenom X3 .
Basically what happens is they designed a quad core chip .
however they are having yield problems .
Often enough , one of the cores fails testing , but the others work .
So what they do is disable that core , and sell a 3 core product .
End result works great , the OS sees 3 CPUs and uses them.OSes do n't care about specifics in terms of core numbers .
Power of two core numbers are just the way it has worked out in many chips so far because we are n't dealing with large numbers .
It is going to quickly go away though .
Intel is going to introduce a 6-core chip next year .
We are heading towards a market that will have processors with a number of cores that is convenient .
What " convenient " is will depend on a lot of factors , but the divisibility of the numbers wo n't be one of them.We may well start to see more odd numbered CPUs .
If you design something with 100 individual units , it is much easier to disable parts if they do n't work .
Might see 96 , 97 , 98 , 99 , and 100 core varieties or something like that .
All the same chip , just with units disabled if they fail.GPUs have been doing this for years .
They are highly parallel and often when a new high end part comes out there 'll be a slightly lower end part that is a bit lower clock and with one or two of the pipelines disabled .
This allows for parts that wo n't pass all the tests , but still mostly work , to be sold rather than thrown out .</tokentext>
<sentencetext>It is done only out of convenience really.
So you have your regular 1 core processor of course (2^0), next step up is a second core (2^1).
Now from there, an easy step is to simply duplicate your dual core setup.
You just make a second copy and put it on the same chip giving you 4 cores (2^2).
This is as far as most chips go, more than 4 cores is not real common.
However you might notice we have a real small sample set, we've only covered 3 powers of two, two of them by necessity.
This trend thus isn't one because computers require it, just because it works out that way.So, if you sniff around, you discover that indeed AMD makes 3 core processors.
They are called the Phenom X3.
Basically what happens is they designed a quad core chip.
however they are having yield problems.
Often enough, one of the cores fails testing, but the others work.
So what they do is disable that core, and sell a 3 core product.
End result works great, the OS sees 3 CPUs and uses them.OSes don't care about specifics in terms of core numbers.
Power of two core numbers are just the way it has worked out in many chips so far because we aren't dealing with large numbers.
It is going to quickly go away though.
Intel is going to introduce a 6-core chip next year.
We are heading towards a market that will have processors with a number of cores that is convenient.
What "convenient" is will depend on a lot of factors, but the divisibility of the numbers won't be one of them.We may well start to see more odd numbered CPUs.
If you design something with 100 individual units, it is much easier to disable parts if they don't work.
Might see 96, 97, 98, 99, and 100 core varieties or something like that.
All the same chip, just with units disabled if they fail.GPUs have been doing this for years.
They are highly parallel and often when a new high end part comes out there'll be a slightly lower end part that is a bit lower clock and with one or two of the pipelines disabled.
This allows for parts that won't pass all the tests, but still mostly work, to be sold rather than thrown out.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870387</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871589</id>
	<title>looks like</title>
	<author>nimbius</author>
	<datestamp>1256567580000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext>/proc/cpuinfo will become a small book.  on the bright side, i guarantee 100 cores meets the draft requirements for 'windows 8 capable' status.</htmltext>
<tokentext>/proc/cpuinfo will become a small book .
on the bright side , i guarantee 100 cores meets the draft requirements for 'windows 8 capable ' status .</tokentext>
<sentencetext>/proc/cpuinfo will become a small book.
on the bright side, i guarantee 100 cores meets the draft requirements for 'windows 8 capable' status.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870285</id>
	<title>Re:100?</title>
	<author>Anonymous</author>
	<datestamp>1256552100000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>It boils down to how much space you have on the die, which is usually square or a rectangle where width is twice the length. Perhaps it's 100 cores, and the cache and interconnects takes up about 28 times a core. Just a wild ass guess.</p></htmltext>
<tokentext>It boils down to how much space you have on the die , which is usually square or a rectangle where width is twice the length .
Perhaps it 's 100 cores , and the cache and interconnects takes up about 28 times a core .
Just a wild ass guess .</tokentext>
<sentencetext>It boils down to how much space you have on the die, which is usually square or a rectangle where width is twice the length.
Perhaps it's 100 cores, and the cache and interconnects takes up about 28 times a core.
Just a wild ass guess.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870247</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871925</id>
	<title>Re:This is great !</title>
	<author>Sancho</author>
	<datestamp>1256569440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I think that "Useless Use of cat" is funny.  I really do.  I go back and read it every once in a while just for grins.</p><p>But we're in the future, now.  Spawning that extra process isn't going to hurt anything.  Yeah, it's fun to poke at people who do silly things like that, but in reality, there's rarely harm in doing things this way.  Even if you're using a shell script which will run "cat file | grep" over and over, you're probably not going to start thrashing on a modern CPU.</p></htmltext>
<tokentext>I think that " Useless Use of cat " is funny .
I really do .
I go back and read it every once in a while just for grins.But we 're in the future , now .
Spawning that extra process is n't going to hurt anything .
Yeah , it 's fun to poke at people who do silly things like that , but in reality , there 's rarely harm in doing things this way .
Even if you 're using a shell script which will run " cat file | grep " over and over , you 're probably not going to start thrashing on a modern CPU .</tokentext>
<sentencetext>I think that "Useless Use of cat" is funny.
I really do.
I go back and read it every once in a while just for grins.But we're in the future, now.
Spawning that extra process isn't going to hurt anything.
Yeah, it's fun to poke at people who do silly things like that, but in reality, there's rarely harm in doing things this way.
Even if you're using a shell script which will run "cat file | grep" over and over, you're probably not going to start thrashing on a modern CPU.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870979</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870629</id>
	<title>Re:This is great !</title>
	<author>Anonymous</author>
	<datestamp>1256557560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Could you send the patch?</p></htmltext>
<tokentext>Could you send the patch ?</tokentext>
<sentencetext>Could you send the patch?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870149</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870299</id>
	<title>Re:Custom ISA?</title>
	<author>Linker3000</author>
	<datestamp>1256552400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>"...if the instruction set isn't any standard type..."</i></p><p>No problem; use the processor for a 'speak and spell'-type toy, a drug store reusable digital camera or a scientific calculator and someone will hack a decent Linux kernel onto it over a weekend.</p></htmltext>
<tokenext>" ...if the instruction set is n't any standard type... " No problem ; use the processor for a 'speak and spell'-type toy , a drug store reusable digital camera or a scientific calculator and someone will hack a decent Linux kernel onto it over a weekend .</tokentext>
<sentencetext>"...if the instruction set isn't any standard type..."No problem; use the processor for a 'speak and spell'-type toy, a drug store reusable digital camera or a scientific calculator and someone will hack a decent Linux kernel onto it over a weekend.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870111</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870879</id>
	<title>Re:Custom ISA?</title>
	<author>Narishma</author>
	<datestamp>1256561520000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>Why was this modded Informative? Can we have any links? Because both the article here as well as Wikipedia and an old Ars Technica story claim that it's based on MIPS.</p></htmltext>
<tokentext>Why was this modded Informative ?
Can we have any links ?
Because both the article here as well as Wikipedia and an old Ars Technica story claim that it 's based on MIPS .</tokentext>
<sentencetext>Why was this modded Informative?
Can we have any links?
Because both the article here as well as Wikipedia and an old Ars Technica story claim that it's based on MIPS.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870265</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870079</id>
	<title>imagine a beowulf...</title>
	<author>gandhi\_2</author>
	<datestamp>1256549460000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>0</modscore>
	<htmltext><p>...cluster of natalie pormemes.</p></htmltext>
<tokentext>...cluster of natalie pormemes .</tokentext>
<sentencetext>...cluster of natalie pormemes.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29876641</id>
	<title>Re:100?</title>
	<author>wrongrook</author>
	<datestamp>1256548740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The previous versions have had 36 and 64 cores arranged in squares.

The next power of 2 that is also a square would be 256 cores but this is probably getting a bit big.</htmltext>
<tokentext>The previous versions have had 36 and 64 cores arranged in squares .
The next power of 2 that is also a square would be 256 cores but this is probably getting a bit big .</tokentext>
<sentencetext>The previous versions have had 36 and 64 cores arranged in squares.
The next power of 2 that is also a square would be 256 cores but this is probably getting a bit big.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870247</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870765</id>
	<title>Re:Custom ISA?</title>
	<author>Locutus</author>
	<datestamp>1256559720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>good one. I browsed the article for what arch it was and was expecting ARM but didn't see it stated. ARM makes sense and the 40nm process has me wondering if it's Cortex a5 or a9 based.<br><br>how about those in some netbooks and a beowulf cluster of those?<nobr> <wbr></nobr>;-)<br><br>LoB</htmltext>
<tokentext>good one .
I browsed the article for what arch it was and was expecting ARM but did n't see it stated .
ARM makes sense and the 40nm process has me wondering if it 's Cortex a5 or a9 based.how about those in some netbooks and a beowulf cluster of those ?
; - ) LoB</tokentext>
<sentencetext>good one.
I browsed the article for what arch it was and was expecting ARM but didn't see it stated.
ARM makes sense and the 40nm process has me wondering if it's Cortex a5 or a9 based.how about those in some netbooks and a beowulf cluster of those?
;-)LoB</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870265</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871059</id>
	<title>Re:This is great !</title>
	<author>asaul</author>
	<datestamp>1256563500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>How many cores does it take to run a parallel algorithm?</p><p>100 - 1 to do the processing, 1 to fetch the data and 98 to calculate an efficient way to make the whole thing run in parallel.</p></htmltext>
<tokentext>How many cores does it take to run a parallel algorithm ? 100 - 1 to do the processing , 1 to fetch the data and 98 to calculate an efficient way to make the whole thing run in parallel .</tokentext>
<sentencetext>How many cores does it take to run a parallel algorithm?100 - 1 to do the processing, 1 to fetch the data and 98 to calculate an efficient way to make the whole thing run in parallel.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870311</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871773</id>
	<title>binary</title>
	<author>Anonymous</author>
	<datestamp>1256568720000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>100-core is binary for quad core.</p></htmltext>
<tokentext>100-core is binary for quad core .</tokentext>
<sentencetext>100-core is binary for quad core.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870653</id>
	<title>Re:What happened to powers of 2?</title>
	<author>JasterBobaMereel</author>
	<datestamp>1256557980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>100 cores plus some room on the chip for management, connections, global cache etc<nobr> <wbr></nobr>....</p><p>Plus if you say 100 cores and put 128 cores on the chip then 28 can fail before you have to bin the chip as a dud<nobr> <wbr></nobr>....</p></htmltext>
<tokentext>100 cores plus some room on the chip for management , connections , global cache etc ....Plus if you say 100 cores and put 128 cores on the chip then 28 can fail before you have to bin the chip as a dud ... .</tokentext>
<sentencetext>100 cores plus some room on the chip for management, connections, global cache etc ....Plus if you say 100 cores and put 128 cores on the chip then 28 can fail before you have to bin the chip as a dud ....</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870387</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29877203</id>
	<title>why not go to the source?</title>
	<author>slew</author>
	<datestamp>1256550720000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>The <a href="http://www.tilera.com/products/TILE-Gx.php" title="tilera.com">company</a> [tilera.com] website claims...</p><p>
&nbsp; 64-bit VLIW processors with 64-bit instruction bundle<br>
&nbsp; 3-deep pipeline with up to 3 instructions per cycle</p><p>I don't know how this could be considered ARM or MIPS-derived...</p><p>A better description might have been <a href="http://www.linuxfordevices.com/c/a/News/64way-chip-gains-Linux-IDE-dev-cards-design-wins/" title="linuxfordevices.com">in this article</a> [linuxfordevices.com]...</p><blockquote><div><p>The Tile64 is based on a proprietary VLIW (very long instruction word) architecture, on which a MIPS-like RISC architecture is implemented in microcode. A hypervisor enables each core to run its own instance of Linux, or alternatively the whole chip can run Tilera's 64-way SMP (symmetrical multiprocessing) Linux implementation.</p></div></blockquote>
	</htmltext>
<tokentext>The company [ tilera.com ] website claims.. .   64-bit VLIW processors with 64-bit instruction bundle   3-deep pipeline with up to 3 instructions per cycleI do n't know how this could be considered ARM or MIPS-derived...A better description might have been in this article [ linuxfordevices.com ] ...The Tile64 is based on a proprietary VLIW ( very long instruction word ) architecture , on which a MIPS-like RISC architecture is implemented in microcode .
A hypervisor enables each core to run its own instance of Linux , or alternatively the whole chip can run Tilera 's 64-way SMP ( symmetrical multiprocessing ) Linux implementation .</tokentext>
<sentencetext>The company [tilera.com] website claims...
  64-bit VLIW processors with 64-bit instruction bundle
  3-deep pipeline with up to 3 instructions per cycleI don't know how this could be considered ARM or MIPS-derived...A better description might have been in this article [linuxfordevices.com]...The Tile64 is based on a proprietary VLIW (very long instruction word) architecture, on which a MIPS-like RISC architecture is implemented in microcode.
A hypervisor enables each core to run its own instance of Linux, or alternatively the whole chip can run Tilera's 64-way SMP (symmetrical multiprocessing) Linux implementation.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870447</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872475</id>
	<title>Supplier  AAA Quality UGG 5325 Shoes,Diesel Jean</title>
	<author>Anonymous</author>
	<datestamp>1256572080000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>
&nbsp; &nbsp; &nbsp; &nbsp; Http://www.tntshoes.com</p><p>Hi friend, we are a prefession online store, you can</p><p>see more photos and price in our website which is</p><p>show in the photos<br>if you are interested please email me by , hellow we</p><p>have run a online shiping mall for many years, our</p><p>website is pls see our website in the photos attached</p><p>attached, we have all kinds brand new shoes,clothing,</p><p>handbag,sunglasses,hats etc for sale, 6000000\% best</p><p>quality with the amazing price. our website is pls see</p><p>our website in the photos attached attached, You will</p><p>find more pictures and the price for our product in our</p><p>website, please see below of the nike shoes we have,</p><p>we take paypal as payment, . shoes SB dunk $28-42</p><p>free shiping.</p><p>
&nbsp; OUR WEBSITE:</p><p>YAHOO:shoppertrade@yahoo.com.cn</p><p>MSN:shoppertrade@hotmail.com</p><p>
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Http://www.tntshoes.com</p></htmltext>
<tokentext>        Http : //www.tntshoes.comHi friend , we are a prefession online store , you cansee more photos and price in our website which isshow in the photosif you are interested please email me by , hellow wehave run a online shiping mall for many years , ourwebsite is pls see our website in the photos attachedattached , we have all kinds brand new shoes,clothing,handbag,sunglasses,hats etc for sale , 6000000 \ % bestquality with the amazing price .
our website is pls seeour website in the photos attached attached , You willfind more pictures and the price for our product in ourwebsite , please see below of the nike shoes we have,we take paypal as payment , .
shoes SB dunk $ 28-42free shiping .
  OUR WEBSITE : YAHOO : shoppertrade @ yahoo.com.cnMSN : shoppertrade @ hotmail.com                                                                           Http : //www.tntshoes.com</tokentext>
<sentencetext>
        Http://www.tntshoes.com
Hi friend, we are a prefession online store, you can see more photos and price in our website which is show in the photos
if you are interested please email me by , hellow we have run a online shiping mall for many years, our website is pls see our website in the photos attached attached, we have all kinds brand new shoes,clothing,handbag,sunglasses,hats etc for sale, 6000000\% best quality with the amazing price.
our website is pls see our website in the photos attached attached, You will find more pictures and the price for our product in our website, please see below of the nike shoes we have, we take paypal as payment, .
shoes SB dunk $28-42 free shiping.
  OUR WEBSITE: YAHOO:shoppertrade@yahoo.com.cn MSN:shoppertrade@hotmail.com
                                                                          Http://www.tntshoes.com</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870149</id>
	<title>Re:This is great !</title>
	<author>BadAnalogyGuy</author>
	<datestamp>1256550360000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p><i>By the way I just typed "make menuconfig" and it wiil let you enter a number up to 512 in the "Maximum number of CPUs" field, so the Linux kernel seems ready for up to 512 CPUs (or cores, they are handled the same way by Linux it seems) as far I can tell by this simple test. Entering a number greater than 512 gives the "You have made an invalid entry" message</i></p><p>Whoa. If you change the source a little, you can enter 1000000 into the Maximum number of CPUs field! Linux is ready for up to a million cores.</p><p>If you change the code a little more, when I enter a number that's too high for menuconfig, it says "We're not talking about your penis size, Holmes"</p></htmltext>
<tokenext>By the way I just typed " make menuconfig " and it wiil let you enter a number up to 512 in the " Maximum number of CPUs " field , so the Linux kernel seems ready for up to 512 CPUs ( or cores , they are handled the same way by Linux it seems ) as far I can tell by this simple test .
Entering a number greater than 512 gives the " You have made an invalid entry " message .
Whoa .
If you change the source a little , you can enter 1000000 into the Maximum number of CPUs field !
Linux is ready for up to a million cores .
If you change the code a little more , when I enter a number that 's too high for menuconfig , it says " We 're not talking about your penis size , Holmes "</tokentext>
<sentencetext>By the way I just typed "make menuconfig" and it wiil let you enter a number up to 512 in the "Maximum number of CPUs" field, so the Linux kernel seems ready for up to 512 CPUs (or cores, they are handled the same way by Linux it seems) as far I can tell by this simple test.
Entering a number greater than 512 gives the "You have made an invalid entry" message.
Whoa.
If you change the source a little, you can enter 1000000 into the Maximum number of CPUs field!
Linux is ready for up to a million cores.
If you change the code a little more, when I enter a number that's too high for menuconfig, it says "We're not talking about your penis size, Holmes"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870793</id>
	<title>Re:Am I *actually*...</title>
	<author>Anonymous</author>
	<datestamp>1256560080000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>It might run Crysis. Just<br>But to actually run Windows and Crysis and not need to kill IE first you might need 4 of these.</p></htmltext>
<tokenext>It might run Crysis .
Just
But to actually run Windows and Crysis and not need to kill IE first you might need 4 of these .</tokentext>
<sentencetext>It might run Crysis.
Just
But to actually run Windows and Crysis and not need to kill IE first you might need 4 of these.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870521</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871085</id>
	<title>Re:When does a CPU become the CPU?</title>
	<author>Anonymous</author>
	<datestamp>1256563800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>This company is probably in its death bed. Engineering cannot save it, but business sense and market hype may. The chip is a viable bit of technology. The revolution in its design is based on the fact that it can have massive parallelism efficiently. Originally, however, the company was aiming at scientific computing. That is they were looking to replace clusters and similar things. The problem is that their chips were less capable of grining through the math. With several processors, however, taking a few more cycles to do each multiply is find if it can do dozens of them at the same time. It did not sell me as I went with a traditional cluster solution.</p><p>I am not sure if the new generation of chip solves the math prolem, but this tid bit sounds like they are just dodging it. The chip may be useful as some kind of VPN or HTTPS accelerator, but those already exist on the market.</p></htmltext>
<tokenext>This company is probably in its death bed .
Engineering can not save it , but business sense and market hype may .
The chip is a viable bit of technology .
The revolution in its design is based on the fact that it can have massive parallelism efficiently .
Originally , however , the company was aiming at scientific computing .
That is they were looking to replace clusters and similar things .
The problem is that their chips were less capable of grining through the math .
With several processors , however , taking a few more cycles to do each multiply is find if it can do dozens of them at the same time .
It did not sell me as I went with a traditional cluster solution .
I am not sure if the new generation of chip solves the math prolem , but this tid bit sounds like they are just dodging it .
The chip may be useful as some kind of VPN or HTTPS accelerator , but those already exist on the market .</tokentext>
<sentencetext>This company is probably in its death bed.
Engineering cannot save it, but business sense and market hype may.
The chip is a viable bit of technology.
The revolution in its design is based on the fact that it can have massive parallelism efficiently.
Originally, however, the company was aiming at scientific computing.
That is they were looking to replace clusters and similar things.
The problem is that their chips were less capable of grining through the math.
With several processors, however, taking a few more cycles to do each multiply is find if it can do dozens of them at the same time.
It did not sell me as I went with a traditional cluster solution.
I am not sure if the new generation of chip solves the math prolem, but this tid bit sounds like they are just dodging it.
The chip may be useful as some kind of VPN or HTTPS accelerator, but those already exist on the market.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870107</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29873819</id>
	<title>Re:Don't buy the hype</title>
	<author>rhsanborn</author>
	<datestamp>1256578860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>What about database access? Sun claims (in crappy marketing speak) to get some stellar performance out of MySQL, albeit with a special build to make it utilize the extra threads. A pain yes, very targeted, yes, but if you're running lots of simple requests, this might just be perfect for your application.</htmltext>
<tokenext>What about database access ?
Sun claims ( in crappy marketing speak ) to get some stellar performance out of MySQL , albeit with a special build to make it utilize the extra threads .
A pain yes , very targeted , yes , but if you 're running lots of simple requests , this might just be perfect for your application .</tokentext>
<sentencetext>What about database access?
Sun claims (in crappy marketing speak) to get some stellar performance out of MySQL, albeit with a special build to make it utilize the extra threads.
A pain yes, very targeted, yes, but if you're running lots of simple requests, this might just be perfect for your application.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871661</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870097</id>
	<title>Awfully generous with the term "core"</title>
	<author>Anonymous</author>
	<datestamp>1256549700000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Yes, I suppose technically any FPGA could be considered a "core" in its own right, but it's a far cry from the CPU cores that you typically associate with the term.</p><p>Putting a stock on a semi-automatic rifle makes it an "assault weapon", but c'mon. It's still a pea shooter.</p></htmltext>
<tokenext>Yes , I suppose technically any FPGA could be considered a " core " in its own right , but it 's a far cry from the CPU cores that you typically associate with the term.Putting a stock on a semi-automatic rifle makes it an " assault weapon " , but c'mon .
It 's still a pea shooter .</tokentext>
<sentencetext>Yes, I suppose technically any FPGA could be considered a "core" in its own right, but it's a far cry from the CPU cores that you typically associate with the term.Putting a stock on a semi-automatic rifle makes it an "assault weapon", but c'mon.
It's still a pea shooter.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29873853</id>
	<title>Re:When does a CPU become the CPU?</title>
	<author>Trieuvan</author>
	<datestamp>1256579040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>How does this fix the apps they ported being mostly IO bound in a lot of cases and 99\% of the cores will still just be eating out of their noses?</p></div><p>SSD</p></p>
	</htmltext>
<tokenext>How does this fix the apps they ported being mostly IO bound in a lot of cases and 99 \ % of the cores will still just be eating out of their noses ?
SSD</tokentext>
<sentencetext>How does this fix the apps they ported being mostly IO bound in a lot of cases and 99\% of the cores will still just be eating out of their noses?
SSD
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870107</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870297</id>
	<title>Re:Custom ISA?</title>
	<author>complete loony</author>
	<datestamp>1256552400000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>
1. LLVM backend<br>
2. Grand central<br>
3. ???<br>
4. Done.
</p><p>Seriously though, this is exactly what Apple have been working towards recently in the compiler space. You write your application and explicitly break up the algorythm into little tasks that can be executed in parallel. Using a syntax that is light weight and expressive. Then your compiler tool chain and runtime JIT manages the runtime threads and determines which processor is best equipped to run each task. It might run on the normal CPU, or it might run on the graphics card.</p></htmltext>
<tokenext>1 . LLVM backend
2 . Grand central
3 . ? ? ?
4 . Done .
Seriously though , this is exactly what Apple have been working towards recently in the compiler space .
You write your application and explicitly break up the algorythm into little tasks that can be executed in parallel .
Using a syntax that is light weight and expressive .
Then your compiler tool chain and runtime JIT manages the runtime threads and determines which processor is best equipped to run each task .
It might run on the normal CPU , or it might run on the graphics card .</tokentext>
<sentencetext>
1. LLVM backend
2. Grand central
3. ???
4. Done.
Seriously though, this is exactly what Apple have been working towards recently in the compiler space.
You write your application and explicitly break up the algorythm into little tasks that can be executed in parallel.
Using a syntax that is light weight and expressive.
Then your compiler tool chain and runtime JIT manages the runtime threads and determines which processor is best equipped to run each task.
It might run on the normal CPU, or it might run on the graphics card.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870111</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870895</id>
	<title>Re:Custom ISA?</title>
	<author>Nursie</author>
	<datestamp>1256561580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>"Seriously though, this is exactly what Apple have been working towards recently in the compiler space. You write your application and explicitly break up the algorythm into little tasks that can be executed in parallel. Using a syntax that is light weight and expressive. Then your compiler tool chain and runtime JIT manages the runtime threads and determines which processor is best equipped to run each task."</i></p><p>AAAAAAAAHHHHH!!!! It's the iPod all over again! Apple did not invent the thread pool! I'm sure Grand central is great but FFS!</p><p><i>"Seriously though, this is exactly what <b>Software Engineers</b> have been working <b>with for years in the thread pool pattern</b>. You write your application and explicitly break up the algorithm into little tasks that can be executed in parallel. Using <b>the language of your choice</b>. Then your <b>Operating System</b> manages the runtime threads and determines which processor is best equipped to run each task.</i></p><p>FTFY. Thread pools are not new. Hell, I wrote a thread pool implementation 10 years ago and it wasn't new then.</p></htmltext>
<tokenext>" Seriously though , this is exactly what Apple have been working towards recently in the compiler space .
You write your application and explicitly break up the algorythm into little tasks that can be executed in parallel .
Using a syntax that is light weight and expressive .
Then your compiler tool chain and runtime JIT manages the runtime threads and determines which processor is best equipped to run each task . "
AAAAAAAAHHHHH ! ! ! !
It 's the iPod all over again !
Apple did not invent the thread pool !
I 'm sure Grand central is great but FFS !
" Seriously though , this is exactly what Software Engineers have been working with for years in the thread pool pattern .
You write your application and explicitly break up the algorithm into little tasks that can be executed in parallel .
Using the language of your choice .
Then your Operating System manages the runtime threads and determines which processor is best equipped to run each task .
FTFY .
Thread pools are not new .
Hell , I wrote a thread pool implementation 10 years ago and it was n't new then .</tokentext>
<sentencetext>"Seriously though, this is exactly what Apple have been working towards recently in the compiler space.
You write your application and explicitly break up the algorythm into little tasks that can be executed in parallel.
Using a syntax that is light weight and expressive.
Then your compiler tool chain and runtime JIT manages the runtime threads and determines which processor is best equipped to run each task."
AAAAAAAAHHHHH!!!!
It's the iPod all over again!
Apple did not invent the thread pool!
I'm sure Grand central is great but FFS!
"Seriously though, this is exactly what Software Engineers have been working with for years in the thread pool pattern.
You write your application and explicitly break up the algorithm into little tasks that can be executed in parallel.
Using the language of your choice.
Then your Operating System manages the runtime threads and determines which processor is best equipped to run each task.
FTFY.
Thread pools are not new.
Hell, I wrote a thread pool implementation 10 years ago and it wasn't new then.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870297</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063</id>
	<title>This is great !</title>
	<author>ls671</author>
	<datestamp>1256549220000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext><p>I can't wait to see the output of :</p><p>cat /proc/cpuinfo</p><p>I guess we will need to use:</p><p>cat /proc/cpuinfo | less</p><p>When we reach 1 million cores, we will need to rearrange the output of cat /proc/cpuinfo to eliminate redundant information ;-))</p><p>By the way I just typed "make menuconfig" and it wiil let you enter a number up to 512 in the "Maximum number of CPUs" field, so the Linux kernel seems ready for up to 512 CPUs (or cores, they are handled the same way by Linux it seems) as far I can tell by this simple test. Entering a number greater than 512 gives the "You have made an invalid entry" message ;-(</p><p>Note: You need to turn on "Support for big SMP systems with more than 8 CPUs" flag as well.</p></htmltext>
<tokenext>I ca n't wait to see the output of : cat /proc/cpuinfo
I guess we will need to use : cat /proc/cpuinfo | less
When we reach 1 million cores , we will need to rearrange the output of cat /proc/cpuinfo to eliminate redundant information ; - ) )
By the way I just typed " make menuconfig " and it wiil let you enter a number up to 512 in the " Maximum number of CPUs " field , so the Linux kernel seems ready for up to 512 CPUs ( or cores , they are handled the same way by Linux it seems ) as far I can tell by this simple test .
Entering a number greater than 512 gives the " You have made an invalid entry " message ; - (
Note : You need to turn on " Support for big SMP systems with more than 8 CPUs " flag as well .
 </tokentext>
<sentencetext>I can't wait to see the output of : cat /proc/cpuinfo
I guess we will need to use: cat /proc/cpuinfo | less
When we reach 1 million cores, we will need to rearrange the output of cat /proc/cpuinfo to eliminate redundant information ;-))
By the way I just typed "make menuconfig" and it wiil let you enter a number up to 512 in the "Maximum number of CPUs" field, so the Linux kernel seems ready for up to 512 CPUs (or cores, they are handled the same way by Linux it seems) as far I can tell by this simple test.
Entering a number greater than 512 gives the "You have made an invalid entry" message ;-(
Note: You need to turn on "Support for big SMP systems with more than 8 CPUs" flag as well.
 </sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29878187</id>
	<title>Re:asymmetric</title>
	<author>BikeHelmet</author>
	<datestamp>1256554800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Neither does, but it could be added to Linux.</p><p>It is, however, a monumental undertaking, since processes would have to be shifted <i>between architectures</i> while running. Unless, of course, you just design some programs to run on the massively parallel slower CPU, with no option of running on the faster one. Then there's no shifting, but you negate a lot of your benefit. And you could just as easily bundle two x86 CPUs on a board to get approximately the same effect, but with much less effort.</p></htmltext>
<tokenext>Neither does , but it could be added to Linux .
It is , however , a monumental undertaking , since processes would have to be shifted between architectures while running .
Unless , of course , you just design some programs to run on the massively parallel slower CPU , with no option of running on the faster one .
Then there 's no shifting , but you negate a lot of your benefit .
And you could just as easily bundle two x86 CPUs on a board to get approximately the same effect , but with much less effort .</tokentext>
<sentencetext>Neither does, but it could be added to Linux.
It is, however, a monumental undertaking, since processes would have to be shifted between architectures while running.
Unless, of course, you just design some programs to run on the massively parallel slower CPU, with no option of running on the faster one.
Then there's no shifting, but you negate a lot of your benefit.
And you could just as easily bundle two x86 CPUs on a board to get approximately the same effect, but with much less effort.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872491</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870573</id>
	<title>Re:100?</title>
	<author>Anonymous</author>
	<datestamp>1256556540000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext>Their plan is to eventually confuse consumers by advertising "X KiloCores! (* KC = 1000 cores)" when everyone expects a KiloCore to be 1024 cores.</htmltext>
<tokenext>Their plan is to eventually confuse consumers by advertising " X KiloCores !
( * KC = 1000 cores ) " when everyone expects a KiloCore to be 1024 cores .</tokentext>
<sentencetext>Their plan is to eventually confuse consumers by advertising "X KiloCores!
(* KC = 1000 cores)" when everyone expects a KiloCore to be 1024 cores.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870247</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870135</id>
	<title>Re:This is great !</title>
	<author>Anonymous</author>
	<datestamp>1256550120000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext>The 'stock' kernel is ready for 512 cpu's. SGI had a 2048-core single image Linux kernel six years ago.</htmltext>
<tokenext>The 'stock ' kernel is ready for 512 cpu 's .
SGI had a 2048-core single image Linux kernel six years ago .</tokentext>
<sentencetext>The 'stock' kernel is ready for 512 cpu's.
SGI had a 2048-core single image Linux kernel six years ago.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870661</id>
	<title>Been there, done that, got the T-Shirt...</title>
	<author>Anonymous</author>
	<datestamp>1256558160000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext><p>OK, so big disclaimer:  I work for Sun (not Oracle, yet!)</p><p>The Sun Niagara T1 chip came out over 3 years ago, and it did 32 threads on 8 cores.<br>And drew something around 50W (200W for a fully-loaded server).  And under $4k.</p><p>The T2 systems came out last year, do 64 threads/CPU for a similar power budget. And even less $/thread.</p><p>The T3 systems likely will be out next year (I don't know specifically when, I'm not In The Know), and the threads/chip should double again, with little power increase.</p><p>Of course, per-thread performance isn't equal to anything like a modern "standard" CPU.  Though, it's now "good enough" for most stuff - the T2 systems have a per-thread performance equal to about the old Pentium3 chips.  I would be flabbergasted if this GX chip had a per-core performance anywhere near that.</p><p>I'm not sure how Intel's Larabee is going to show (it's still nowhere near release), but the T-seres chips from Sun are cheap, open, and available now. And they run Solaris AND Linux.  So unless this new GX chip is radically more efficient/higher-performance/less costly, I don't see this company making any impact.</p><p>-Erik</p></htmltext>
<tokenext>OK , so big disclaimer : I work for Sun ( not Oracle , yet ! )
The Sun Niagara T1 chip came out over 3 years ago , and it did 32 threads on 8 cores .
And drew something around 50W ( 200W for a fully-loaded server ) .
And under $ 4k .
The T2 systems came out last year , do 64 threads/CPU for a similar power budget .
And even less $ /thread .
The T3 systems likely will be out next year ( I do n't know specifically when , I 'm not In The Know ) , and the threads/chip should double again , with little power increase .
Of course , per-thread performance is n't equal to anything like a modern " standard " CPU .
Though , it 's now " good enough " for most stuff - the T2 systems have a per-thread performance equal to about the old Pentium3 chips .
I would be flabbergasted if this GX chip had a per-core performance anywhere near that .
I 'm not sure how Intel 's Larabee is going to show ( it 's still nowhere near release ) , but the T-seres chips from Sun are cheap , open , and available now .
And they run Solaris AND Linux .
So unless this new GX chip is radically more efficient/higher-performance/less costly , I do n't see this company making any impact .
-Erik</tokentext>
<sentencetext>OK, so big disclaimer: I work for Sun (not Oracle, yet!)
The Sun Niagara T1 chip came out over 3 years ago, and it did 32 threads on 8 cores.
And drew something around 50W (200W for a fully-loaded server).
And under $4k.
The T2 systems came out last year, do 64 threads/CPU for a similar power budget.
And even less $/thread.
The T3 systems likely will be out next year (I don't know specifically when, I'm not In The Know), and the threads/chip should double again, with little power increase.
Of course, per-thread performance isn't equal to anything like a modern "standard" CPU.
Though, it's now "good enough" for most stuff - the T2 systems have a per-thread performance equal to about the old Pentium3 chips.
I would be flabbergasted if this GX chip had a per-core performance anywhere near that.
I'm not sure how Intel's Larabee is going to show (it's still nowhere near release), but the T-seres chips from Sun are cheap, open, and available now.
And they run Solaris AND Linux.
So unless this new GX chip is radically more efficient/higher-performance/less costly, I don't see this company making any impact.
-Erik</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871121</id>
	<title>Re:Chips target tasks</title>
	<author>Skapare</author>
	<datestamp>1256564100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>An associative memory requirement could be better served by a custom high-core count, CPU ... if it has sufficient memory on board (e.g. sufficient total memory bus bandwidth).</p></htmltext>
<tokenext>An associative memory requirement could be better served by a custom high-core count , CPU ... if it has sufficient memory on board ( e.g. sufficient total memory bus bandwidth ) .</tokentext>
<sentencetext>An associative memory requirement could be better served by a custom high-core count, CPU ... if it has sufficient memory on board (e.g. sufficient total memory bus bandwidth).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870775</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870345</id>
	<title>Sounds Like</title>
	<author>Nerdfest</author>
	<datestamp>1256553060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Sounds like something that might be useful in a video game console ...</htmltext>
<tokenext>Sounds like something that might be useful in a video game console ...</tokentext>
<sentencetext>Sounds like something that might be useful in a video game console ...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870111</id>
	<title>Custom ISA?</title>
	<author>Henriok</author>
	<datestamp>1256549820000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>Massive amounts or cores are cool and all that, but if the instruction set isn't any standard type (ie x86, Sparc, ARM, PowerPC or MIPS) chances are that it won't see light outside highly customized applications. Sure, Linux will probably run it. Linux run on anything, but it won't be put in a regular computer other than as an accelerator of some sort, like GPUs which are massively multicore too. Intel's Larrabee though..</htmltext>
<tokenext>Massive amounts or cores are cool and all that , but if the instruction set is n't any standard type ( ie x86 , Sparc , ARM , PowerPC or MIPS ) chances are that it wo n't see light outside highly customized applications .
Sure , Linux will probably run it .
Linux run on anything , but it wo n't be put in a regular computer other than as an accelerator of some sort , like GPUs which are massively multicore too .
Intel 's Larrabee though. .</tokentext>
<sentencetext>Massive amounts or cores are cool and all that, but if the instruction set isn't any standard type (ie x86, Sparc, ARM, PowerPC or MIPS) chances are that it won't see light outside highly customized applications.
Sure, Linux will probably run it.
Linux run on anything, but it won't be put in a regular computer other than as an accelerator of some sort, like GPUs which are massively multicore too.
Intel's Larrabee though..</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871093</id>
	<title>Re:What happened to powers of 2?</title>
	<author>Skapare</author>
	<datestamp>1256563860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Where's the law that says the core layout, or even the die itself, has to be square?  Square, or nearly square, might be the most convenient for minimum paths and such.  Still, you need to have space somewhere for "between core" control circuits.  Even if you lay out the die in a nice square grid, you don't have to make each cell be a core.  Getting data lines into the cores in the middle can be an interesting challenge.  But then, 100 cores trying to load a word from different locations in RAM all at the same time might be a bit congested.  I'd suggest some internal RAM in place of some cores.</p></htmltext>
<tokenext>Where 's the law that says the core layout , or even the die itself , has to be square ?
Square , or nearly square , might be the most convenient for minimum paths and such .
Still , you need to have space somewhere for " between core " control circuits .
Even if you lay out the die in a nice square grid , you do n't have to make each cell be a core .
Getting data lines into the cores in the middle can be an interesting challenge .
But then , 100 cores trying to load a word from different locations in RAM all at the same time might be a bit congested .
I 'd suggest some internal RAM in place of some cores .</tokentext>
<sentencetext>Where's the law that says the core layout, or even the die itself, has to be square?
Square, or nearly square, might be the most convenient for minimum paths and such.
Still, you need to have space somewhere for "between core" control circuits.
Even if you lay out the die in a nice square grid, you don't have to make each cell be a core.
Getting data lines into the cores in the middle can be an interesting challenge.
But then, 100 cores trying to load a word from different locations in RAM all at the same time might be a bit congested.
I'd suggest some internal RAM in place of some cores.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870451</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872995</id>
	<title>Really that easy? Don't think so.</title>
	<author>Henriok</author>
	<datestamp>1256574840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Really? I did a quick Googling too, and found nothing. There's certainly nothing of this sort to be found on their homepage, nor ARM's. I did a lengthy googling and found an Intel executive stating that it's ARM, but I also found an ArsTechnica article <a href="http://arstechnica.com/hardware/news/2007/08/MIT-startup-raises-multicore-bar-with-new-64-core-CPU.ars" title="arstechnica.com">http://arstechnica.com/hardware/news/2007/08/MIT-startup-raises-multicore-bar-with-new-64-core-CPU.ars</a> [arstechnica.com] stating that it's a MIPS derived VLIW architecture. After MIPS revealed itself as a candidate it was easy to find more information, and MIPS it is.</htmltext>
<tokenext>Really ?
I did a quick Googling too , and found nothing .
There 's certainly nothing of this sort to be found on their homepage , nor ARM 's .
I did a lengthy googling and found an Intel executive stating that it 's ARM , but I also found an ArsTechnica article http : //arstechnica.com/hardware/news/2007/08/MIT-startup-raises-multicore-bar-with-new-64-core-CPU.ars [ arstechnica.com ] stating that it 's a MIPS derived VLIW architecture .
After MIPS revealed itself as a candidate it was easy to find more information , and MIPS it is .</tokentext>
<sentencetext>Really?
I did a quick Googling too, and found nothing.
There's certainly nothing of this sort to be found on their homepage, nor ARM's.
I did a lengthy googling and found an Intel executive stating that it's ARM, but I also found an ArsTechnica article http://arstechnica.com/hardware/news/2007/08/MIT-startup-raises-multicore-bar-with-new-64-core-CPU.ars [arstechnica.com] stating that it's a MIPS derived VLIW architecture.
After MIPS revealed itself as a candidate it was easy to find more information, and MIPS it is.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870265</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870247</id>
	<title>100?</title>
	<author>Anonymous</author>
	<datestamp>1256551800000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Wouldn't it have been better to make it a power of 2?  Some work is more easily divided when you can just keep halving it. 64 or 128 would have been more logical, I would have thought. I'm not an SMP programmer though, so perhaps it doesn't make any difference.</p></htmltext>
<tokenext>Would n't it have been better to make it a power of 2 ?
Some work is more easily divided when you can just keep halving it .
64 or 128 would have been more logical , I would have thought .
I 'm not an SMP programmer though , so perhaps it does n't make any difference .</tokentext>
<sentencetext>Wouldn't it have been better to make it a power of 2?
Some work is more easily divided when you can just keep halving it.
64 or 128 would have been more logical, I would have thought.
I'm not an SMP programmer though, so perhaps it doesn't make any difference.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871191</id>
	<title>Re:Awfully generous with the term "core"</title>
	<author>Anonymous</author>
	<datestamp>1256564880000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>There's also another potential problem. All of these 100 "cores" share an extremely small amount of cache.</p><div class="quote"><p>32K L1i cache, 32K L1d cache, 256K L2 cache <b>per tile</b></p></div>
	</htmltext>
<tokenext>There 's also another potential problem .
All of these 100 " cores " share an extremely small amount of cache .
32K L1i cache , 32K L1d cache , 256K L2 cache per tile</tokentext>
<sentencetext>There's also another potential problem.
All of these 100 "cores" share an extremely small amount of cache.
32K L1i cache, 32K L1d cache, 256K L2 cache per tile
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870097</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872553</id>
	<title>Re:This is great !</title>
	<author>amorsen</author>
	<datestamp>1256572440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You can't depend on less working with anything in /proc. What you really want is less &lt; /proc/cpuinfo</p></htmltext>
<tokenext>You ca n't depend on less working with anything in /proc .
What you really want is less /proc/cpuinfo</tokentext>
<sentencetext>You can't depend on less working with anything in /proc.
What you really want is less  /proc/cpuinfo</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870585</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29874303</id>
	<title>Re:When does a CPU become the CPU?</title>
	<author>sjames</author>
	<datestamp>1256581080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I can't speak to the I/O issue, since that seems like it would be a huge problem. As for the kernel issue, a driver in the kernel can be all you need. Open the device as a file (as usual) and then point it to the binary to be run and tell it to go. The native kernel on the CPU sees it all as just data moving through a device file as usual.</p><p>If I/O is required, the userspace program on the CPU will either have to perform the operations on the card's behalf or the card can have its own I/O subsystem (unlikely).</p><p>I know there used to be a PDP-11 on a PCI card that used a strategy something like that so that the PDP's "disks" were files on the native PC's drive.</p></htmltext>
<tokenext>I ca n't speak to the I/O issue , since that seems like it would be a huge problem .
As for the kernel issue , a driver in the kernel can be all you need .
Open the device as a file ( as usual ) and then point it to the binary to be run and tell it to go .
The native kernel on the CPU sees it all as just data moving through a device file as usual .
If I/O is required , the userspace program on the CPU will either have to perform the operations on the card 's behalf or the card can have its own I/O subsystem ( unlikely ) .
I know there used to be a PDP-11 on a PCI card that used a strategy something like that so that the PDP 's " disks " were files on the native PC 's drive .</tokentext>
<sentencetext>I can't speak to the I/O issue, since that seems like it would be a huge problem.
As for the kernel issue, a driver in the kernel can be all you need.
Open the device as a file (as usual) and then point it to the binary to be run and tell it to go.
The native kernel on the CPU sees it all as just data moving through a device file as usual.
If I/O is required, the userspace program on the CPU will either have to perform the operations on the card's behalf or the card can have its own I/O subsystem (unlikely).
I know there used to be a PDP-11 on a PCI card that used a strategy something like that so that the PDP's "disks" were files on the native PC's drive.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870107</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870979</id>
	<title>Re:This is great !</title>
	<author>TheRaven64</author>
	<datestamp>1256562720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It's interesting that even in 2009 on a site for geeks, many people seem not to know about cat abuse and would still rather spawn two processes to do the job of one.</htmltext>
<tokenext>It 's interesting that even in 2009 on a site for geeks , many people seem not to know about cat abuse and would still rather spawn two processes to do the job of one .</tokentext>
<sentencetext>It's interesting that even in 2009 on a site for geeks, many people seem not to know about cat abuse and would still rather spawn two processes to do the job of one.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870615</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29882437</id>
	<title>Re:asymmetric</title>
	<author>TheBAFH</author>
	<datestamp>1256650440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Is there any way to flag slashdot comments as "possible future prior art"? It could be useful. :-)</p></htmltext>
<tokenext>Is there any way to flag slashdot comments as " possible future prior art " ?
It could be useful .
: - )</tokentext>
<sentencetext>Is there any way to flag slashdot comments as "possible future prior art"?
It could be useful.
:-)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872491</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872491</id>
	<title>asymmetric</title>
	<author>Anonymous</author>
	<datestamp>1256572140000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>It's been reported that these cores will be relatively underpowered, though both the total processing power and cost per watt will be quite impressive.  This makes the chip appropriate for putting in a server but not so much a desktop machine, where CPU-intensive single-threads may bog things down.</p><p>So what about one of these in combination with a 2-, 3- or 4-core AMD/Intel chip?  The serious threads can be run on the faster chip, while all the background stuff can be spread among the slower cores?  Does Windows have the ability to prioritize like that?  Does Linux?</p></htmltext>
<tokenext>It 's been reported that these cores will be relatively underpowered , though both the total processing power and cost per watt will be quite impressive .
This makes the chip appropriate for putting in a server but not so much a desktop machine , where CPU-intensive single-threads may bog things down .
So what about one of these in combination with a 2- , 3- or 4-core AMD/Intel chip ?
The serious threads can be run on the faster chip , while all the background stuff can be spread among the slower cores ?
Does Windows have the ability to prioritize like that ?
Does Linux ?</tokentext>
<sentencetext>It's been reported that these cores will be relatively underpowered, though both the total processing power and cost per watt will be quite impressive.
This makes the chip appropriate for putting in a server but not so much a desktop machine, where CPU-intensive single-threads may bog things down.
So what about one of these in combination with a 2-, 3- or 4-core AMD/Intel chip?
The serious threads can be run on the faster chip, while all the background stuff can be spread among the slower cores?
Does Windows have the ability to prioritize like that?
Does Linux?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871031</id>
	<title>Re:This is great !</title>
	<author>Anonymous</author>
	<datestamp>1256563320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Get with the times - was doing that a year ago with psrinfo on a Sun T5240 (128 threads).  Have not got my hands on a T5440 yet though... 256 threads.</p></htmltext>
<tokenext>Get with the times - was doing that a year ago with psrinfo on a Sun T5240 ( 128 threads ) .
Have not got my hands on a T5440 yet though... 256 threads .</tokentext>
<sentencetext>Get with the times - was doing that a year ago with psrinfo on a Sun T5240 (128 threads).
Have not got my hands on a T5440 yet though... 256 threads.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871503</id>
	<title>Re:This is great !</title>
	<author>Anonymous</author>
	<datestamp>1256566980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>OK, if one penguin for every core is displayed when booting... then your screen would be filled with hundreds of them, just like some Antarctic islands!</p></htmltext>
<tokenext>OK , if one penguin for every core is displayed when booting ... then your screen would be filled with hundreds of them , just like some Antarctic islands !</tokenext>
<sentencetext>OK, if one penguin for every core is displayed when booting... then your screen would be filled with hundreds of them, just like some Antarctic islands!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872679</id>
	<title>Dancing Hamsters...</title>
	<author>jameskojiro</author>
	<datestamp>1256573100000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>It is like 100 Dancing Hamsters in your CPU.</p></htmltext>
<tokenext>It is like 100 Dancing Hamsters in your CPU .</tokentext>
<sentencetext>It is like 100 Dancing Hamsters in your CPU.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29884861</id>
	<title>Re:This is great !</title>
	<author>DaVince21</author>
	<datestamp>1256662980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I thought they upgraded that to 4096 with 2.6.30 (but it still can't display Flash video smoothly)?</p></htmltext>
<tokenext>I thought they upgraded that to 4096 with 2.6.30 ( but it still ca n't display Flash video smoothly ) ?</tokentext>
<sentencetext>I thought they upgraded that to 4096 with 2.6.30 (but it still can't display Flash video smoothly)?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870711</id>
	<title>Re:This is great !</title>
	<author>glgraca</author>
	<datestamp>1256558760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>When we reach 1 million cores, we'll probably be able to ask the computer what's on his mind...</p></htmltext>
<tokenext>When we reach 1 million cores , we 'll probably be able to ask the computer what 's on his mind.. .</tokentext>
<sentencetext>When we reach 1 million cores, we'll probably be able to ask the computer what's on his mind...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29878157</id>
	<title>Re:Awfully generous with the term "core"</title>
	<author>mako1138</author>
	<datestamp>1256554680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Wow, two bad analogies in one post?</p><p>This Tilera product doesn't look like an FPGA. Standard cell ASIC, maybe, but definitely not an FPGA.</p></htmltext>
<tokenext>Wow , two bad analogies in one post ?
This Tilera product does n't look like an FPGA .
Standard cell ASIC , maybe , but definitely not an FPGA .</tokentext>
<sentencetext>Wow, two bad analogies in one post?
This Tilera product doesn't look like an FPGA.
Standard cell ASIC, maybe, but definitely not an FPGA.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870097</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29875195</id>
	<title>Re:15-bladed shaving razor</title>
	<author>Thagg</author>
	<datestamp>1256585100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I do think that we are in the midst of a revolution in computing, where every application, every algorithm, every problem will be examined from the beginning on how it can best take advantage of hundreds if not thousands of 'cores'.  In my visual effects industry, it clearly dominates conversation and thought already, and I don't think we're more than a year or two in front of everybody else.</p><p>At a recent conference, NVidia showed a very useful almost-real-time global-illumination renderer that worked best when it was running about 100,000 threads simultaneously.  Interestingly, the program didn't do any of the standard tricks to get exceptional performance -- those tricks are hard, are fragile, have weird corner cases, and are just to be avoided if at all possible.  Doing relatively brute-force computation on scaldingly fast computers is a great alternative!</p><p>I predict that you will be using massively parallel programs soon.  Either you'll write them yourself, or you'll be using your competitors' programs :)</p></htmltext>
<tokenext>I do think that we are in the midst of a revolution in computing , where every application , every algorithm , every problem will be examined from the beginning on how it can best take advantage of hundreds if not thousands of 'cores' .
In my visual effects industry , it clearly dominates conversation and thought already , and I do n't think we 're more than a year or two in front of everybody else .
At a recent conference , NVidia showed a very useful almost-real-time global-illumination renderer that worked best when it was running about 100,000 threads simultaneously .
Interestingly , the program did n't do any of the standard tricks to get exceptional performance -- those tricks are hard , are fragile , have weird corner cases , and are just to be avoided if at all possible .
Doing relatively brute-force computation on scaldingly fast computers is a great alternative !
I predict that you will be using massively parallel programs soon .
Either you 'll write them yourself , or you 'll be using your competitors ' programs : )</tokentext>
<sentencetext>I do think that we are in the midst of a revolution in computing, where every application, every algorithm, every problem will be examined from the beginning on how it can best take advantage of hundreds if not thousands of 'cores'.
In my visual effects industry, it clearly dominates conversation and thought already, and I don't think we're more than a year or two in front of everybody else.
At a recent conference, NVidia showed a very useful almost-real-time global-illumination renderer that worked best when it was running about 100,000 threads simultaneously.
Interestingly, the program didn't do any of the standard tricks to get exceptional performance -- those tricks are hard, are fragile, have weird corner cases, and are just to be avoided if at all possible.
Doing relatively brute-force computation on scaldingly fast computers is a great alternative!
I predict that you will be using massively parallel programs soon.
Either you'll write them yourself, or you'll be using your competitors' programs :)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871669</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870585</id>
	<title>Re:This is great !</title>
	<author>Anonymous</author>
	<datestamp>1256556720000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>cat /proc/cpuinfo | less</p></div><p>I guess you can also use</p><p>less /proc/cpuinfo</p>
	</htmltext>
<tokenext>cat /proc/cpuinfo | less
I guess you can also use
less /proc/cpuinfo</tokentext>
<sentencetext>cat /proc/cpuinfo | less
I guess you can also use
less /proc/cpuinfo
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870823</id>
	<title>Re:When does a CPU become the CPU?</title>
	<author>drspliff</author>
	<datestamp>1256560800000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p> <a href="http://www.channelregister.co.uk/2009/10/26/tilera\_third\_gen\_mesh\_chips/page2.html" title="channelregister.co.uk">The Register</a> [channelregister.co.uk] goes into more detail than this article, as usual.</p><blockquote><div><p>The Tile-Gx chips will run the Linux 2.6.26 kernel and add-on components that make it an operating system. Apache, PHP, and MySQL are being ported to the chips, and the programming tools will include the latest GCC compiler set. (Three years ago, Tilera had licensed SGI's MIPS-based C/C++ compilers for the Tile chips, which is why I think Tilera has also licensed some MIPS intellectual property to create its chip design, but the company has not discussed this.)</p></div></blockquote><p>So it seems pretty standard and they're using existing open &amp; closed source MIPS toolchains, however there's still "will" and "are being" in that sentence which brings a little unease...</p>
	</htmltext>
<tokenext>The Register [ channelregister.co.uk ] goes into more detail than this article , as usual .
The Tile-Gx chips will run the Linux 2.6.26 kernel and add-on components that make it an operating system .
Apache , PHP , and MySQL are being ported to the chips , and the programming tools will include the latest GCC compiler set .
( Three years ago , Tilera had licensed SGI 's MIPS-based C/C + + compilers for the Tile chips , which is why I think Tilera has also licensed some MIPS intellectual property to create its chip design , but the company has not discussed this . )
So it seems pretty standard and they 're using existing open &amp; closed source MIPS toolchains , however there 's still " will " and " are being " in that sentence which brings a little unease ...</tokentext>
<sentencetext> The Register [channelregister.co.uk] goes into more detail than this article, as usual.
The Tile-Gx chips will run the Linux 2.6.26 kernel and add-on components that make it an operating system.
Apache, PHP, and MySQL are being ported to the chips, and the programming tools will include the latest GCC compiler set.
(Three years ago, Tilera had licensed SGI's MIPS-based C/C++ compilers for the Tile chips, which is why I think Tilera has also licensed some MIPS intellectual property to create its chip design, but the company has not discussed this.)
So it seems pretty standard and they're using existing open &amp; closed source MIPS toolchains, however there's still "will" and "are being" in that sentence which brings a little unease...
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870107</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871155</id>
	<title>And...</title>
	<author>travbrad</author>
	<datestamp>1256564580000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>-1</modscore>
	<htmltext>it still can't run Crysis at 60FPS :p</htmltext>
<tokenext>it still ca n't run Crysis at 60FPS : p</tokentext>
<sentencetext>it still can't run Crysis at 60FPS :p</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29887063</id>
	<title>Re:This is great !</title>
	<author>Anonymous</author>
	<datestamp>1256672640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>This thing's gonna run Duke Nukem Forever like a champ.</p></htmltext>
<tokenext>This thing 's gon na run Duke Nukem Forever like a champ .</tokentext>
<sentencetext>This thing's gonna run Duke Nukem Forever like a champ.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872069</id>
	<title>Re:Custom ISA?</title>
	<author>Angostura</author>
	<datestamp>1256570220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That's a coincidence, I was thinking that when you get to that many cores, you're effectively producing something akin to a VLIW processor, with each instruction handed to its own execution system.</p></htmltext>
<tokenext>That 's a coincidence , I was thinking that when you get to that many cores , you 're effectively producing something akin to a VLIW processor , with each instruction handed to its own execution system .</tokenext>
<sentencetext>That's a coincidence, I was thinking that when you get to that many cores, you're effectively producing something akin to a VLIW processor, with each instruction handed to its own execution system.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870447</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870775</id>
	<title>Chips target tasks</title>
	<author>Decameron81</author>
	<datestamp>1256559900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>The two-year-old startup's Tile-GX series of chips are targeted at servers and appliances that execute Web-related functions such as indexing, Web search and video search.</p></div></blockquote><p>
Can someone explain to me how a chip can be targeted at much higher-level tasks like these?
<br> <br>
I realize there are surely technical means to achieve this goal, I just can't imagine myself what these means could be.</p>
	</htmltext>
<tokenext>The two-year-old startup 's Tile-GX series of chips are targeted at servers and appliances that execute Web-related functions such as indexing , Web search and video search .
Can someone explain to me how a chip can be targeted at much higher-level tasks like these ?
I realize there are surely technical means to achieve this goal , I just ca n't imagine myself what these means could be .</tokentext>
<sentencetext>The two-year-old startup's Tile-GX series of chips are targeted at servers and appliances that execute Web-related functions such as indexing, Web search and video search.
Can someone explain to me how a chip can be targeted at much higher-level tasks like these?
I realize there are surely technical means to achieve this goal, I just can't imagine myself what these means could be.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871661</id>
	<title>Don't buy the hype</title>
	<author>Anonymous</author>
	<datestamp>1256568060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I've been personally let down time after time by systems that make these claims. I know it's a bit different, but Sun's T2/T2+ chips have been disappointing. Sure psrinfo shows 128 CPUs, but overall performance sucks for anything more than web serving. Sure, the kernel may be thread-aware, but the underlying parts of the OS aren't... Plus, the binutils and misc utilities that comprise day-to-day tasks don't take advantage of that many execution threads... You have to get special gzip that is parallelized.</p><p>I'll withhold judgement until I see some benchmarks in real world scenarios.</p></htmltext>
<tokenext>I 've been personally let down time after time by systems that make these claims .
I know it 's a bit different , but Sun 's T2/T2 + chips have been disappointing .
Sure psrinfo shows 128 CPUs , but overall performance sucks for anything more than web serving .
Sure , the kernel may be thread-aware , but the underlying parts of the OS are n't ... Plus , the binutils and misc utilities that comprise day-to-day tasks do n't take advantage of that many execution threads ... You have to get special gzip that is parallelized .
I 'll withhold judgement until I see some benchmarks in real world scenarios .</tokentext>
<sentencetext>I've been personally let down time after time by systems that make these claims.
I know it's a bit different, but Sun's T2/T2+ chips have been disappointing.
Sure psrinfo shows 128 CPUs, but overall performance sucks for anything more than web serving.
Sure, the kernel may be thread-aware, but the underlying parts of the OS aren't... Plus, the binutils and misc utilities that comprise day-to-day tasks don't take advantage of that many execution threads... You have to get special gzip that is parallelized.
I'll withhold judgement until I see some benchmarks in real world scenarios.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870289</id>
	<title>Re:100?</title>
	<author>Fotograf</author>
	<datestamp>1256552220000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext>it does if you are carefully starting applications in power of two and designing your applications to use power of two threads.</htmltext>
<tokenext>it does if you are carefully starting applications in power of two and designing your applications to use power of two threads .</tokentext>
<sentencetext>it does if you are carefully starting applications in power of two and designing your applications to use power of two threads.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870247</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872953</id>
	<title>Re:This is great !</title>
	<author>Anonymous</author>
	<datestamp>1256574600000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Stop abusing your cat: less /proc/cpuinfo</p></htmltext>
<tokenext>Stop abusing your cat : less /proc/cpuinfo</tokentext>
<sentencetext>Stop abusing your cat: less /proc/cpuinfo</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871441</id>
	<title>Makes me glad I've been learning Clojure</title>
	<author>Paul Fernhout</author>
	<datestamp>1256566680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Clojure is a lisp on the JVM designed for multi-threading. From:<br>
&nbsp; &nbsp; <a href="http://clojure.org/" title="clojure.org">http://clojure.org/</a> [clojure.org]<br>"""<br>Clojure is a dynamic programming language that targets the Java Virtual Machine (and the CLR ). It is designed to be a general-purpose language, combining the approachability and interactive development of a scripting language with an efficient and robust infrastructure for multithreaded programming. Clojure is a compiled language - it compiles directly to JVM bytecode, yet remains completely dynamic. Every feature supported by Clojure is supported at runtime. Clojure provides easy access to the Java frameworks, with optional type hints and type inference, to ensure that calls to Java can avoid reflection. Clojure is a dialect of Lisp, and shares with Lisp the code-as-data philosophy and a powerful macro system. Clojure is predominantly a functional programming language, and features a rich set of immutable, persistent data structures. When mutable state is needed, Clojure offers a software transactional memory system and reactive Agent system that ensure clean, correct, multithreaded designs.<br>"""</p></htmltext>
<tokenext>Clojure is a lisp on the JVM designed for multi-threading .
From :     http : //clojure.org/ [ clojure.org ] " " " Clojure is a dynamic programming language that targets the Java Virtual Machine ( and the CLR ) .
It is designed to be a general-purpose language , combining the approachability and interactive development of a scripting language with an efficient and robust infrastructure for multithreaded programming .
Clojure is a compiled language - it compiles directly to JVM bytecode , yet remains completely dynamic .
Every feature supported by Clojure is supported at runtime .
Clojure provides easy access to the Java frameworks , with optional type hints and type inference , to ensure that calls to Java can avoid reflection .
Clojure is a dialect of Lisp , and shares with Lisp the code-as-data philosophy and a powerful macro system .
Clojure is predominantly a functional programming language , and features a rich set of immutable , persistent data structures .
When mutable state is needed , Clojure offers a software transactional memory system and reactive Agent system that ensure clean , correct , multithreaded designs .
" " "</tokentext>
<sentencetext>Clojure is a lisp on the JVM designed for multi-threading.
From:
    http://clojure.org/ [clojure.org]"""Clojure is a dynamic programming language that targets the Java Virtual Machine (and the CLR ).
It is designed to be a general-purpose language, combining the approachability and interactive development of a scripting language with an efficient and robust infrastructure for multithreaded programming.
Clojure is a compiled language - it compiles directly to JVM bytecode, yet remains completely dynamic.
Every feature supported by Clojure is supported at runtime.
Clojure provides easy access to the Java frameworks, with optional type hints and type inference, to ensure that calls to Java can avoid reflection.
Clojure is a dialect of Lisp, and shares with Lisp the code-as-data philosophy and a powerful macro system.
Clojure is predominantly a functional programming language, and features a rich set of immutable, persistent data structures.
When mutable state is needed, Clojure offers a software transactional memory system and reactive Agent system that ensure clean, correct, multithreaded designs.
"""</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871581</id>
	<title>Re:Custom ISA?</title>
	<author>V!NCENT</author>
	<datestamp>1256567580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yes, it's the iPod all over again: nothing new but done right for the first time.</p></htmltext>
<tokenext>Yes it 's the iPod all over again : nothing new but done right for the first time .
\ _'</tokentext>
<sentencetext>Yes it's the iPod all over again: nothing new but done right for the first time.
\_'</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870895</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870091</id>
	<title>OOOoooo!  BABY LIGHT MY FIRE !!</title>
	<author>Anonymous</author>
	<datestamp>1256549640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Yeah, baby !!  That's a LOT OF POWER to turn my knobs !!</p></htmltext>
<tokenext>Yeah , baby ! !
That 's a LOT OF POWER to turn my knobs !
!</tokentext>
<sentencetext>Yeah, baby !!
That's a LOT OF POWER to turn my knobs !
!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870387</id>
	<title>What happened to powers of 2?</title>
	<author>Godefricus</author>
	<datestamp>1256553660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>... Am I the only one who gets mildly suspicious when reading 100-core instead of 128-core?</p></htmltext>
<tokenext>.. I 'm I the only one who gets mildly suspicious when reading 100-core instead of 128-core ?</tokentext>
<sentencetext>... Am I the only one who gets mildly suspicious when reading 100-core instead of 128-core?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870327</id>
	<title>Re:100?</title>
	<author>harry666t</author>
	<datestamp>1256552880000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext><p>SMP FAQ.</p><p>Q: Does the number of processors in an SMP system need to be a power of two/divisible by two?</p><p>A: No.</p><p>Q: Does the number of processors in an SMP system...</p><p>A: Any number of CPUs/cores that is larger than one will make the system an SMP system*.</p><p>(* except when it's an asymmetrical architecture)</p><p>Q: How do these patterns (power of 2, divisible by 2, etc.) of numbers of cores affect performance?</p><p>A: Performance depends on the architecture of the system. You cannot judge by simply looking at the number of cores, just as you can't simply look at MHz.</p></htmltext>
<tokenext>SMP FAQ.Q : Does the number of processors in a SMP system need to be a power of two/divisible by two ? A : No.Q : Does the number of processors in a SMP system...A : Any number of CPUs/cores that is larger than one will make the system an SMP system * .
( * except when it 's an asymmetrical architecture ) Q : How do these patterns ( power of 2 , divisible by 2 , etc ) of numbers of cores affect performance ? A : Performance depends on the architecture of the system .
You can not judge by simply looking at the number of cores , just as you ca n't simply look at MHz .</tokentext>
<sentencetext>SMP FAQ.Q: Does the number of processors in a SMP system need to be a power of two/divisible by two?A: No.Q: Does the number of processors in a SMP system...A: Any number of CPUs/cores that is larger than one will make the system an SMP system*.
(* except when it's an asymmetrical architecture)Q: How do these patterns (power of 2, divisible by 2, etc) of numbers of cores affect performance?A: Performance depends on the architecture of the system.
You cannot judge by simply looking at the number of cores, just as you can't simply look at MHz.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870247</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29873941</id>
	<title>Re:Chips target tasks</title>
	<author>Anonymous</author>
	<datestamp>1256579460000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><i>Can someone explain to me how a chip can be targeted at much higher-level tasks like these?</i></p><p>Technically, it is more about what it is not targeting.</p><p>By removing some general-purpose SMP goals, they can squeeze a lot more power for these loosely coupled, data-parallel tasks into a chip.  Everything in that list is something that can be parallelized into nice small chunks with mostly private intermediate data.  For these apps, you can define pipelines and data-flow solutions which map easily onto the Tilera architecture.  From the beginning, Tilera focused on the development tools needed to design such data-flow apps.</p><p>They've been targeting these markets because they are approachable for parallelism and approachable from an engineering perspective: embedded server and appliance markets are more accepting of alternative, low-cost designs since they do not have as much concern with long-term platform stability for general-purpose workloads.  You can do things like size a wire-speed encoding/decoding or pattern-matching workload and define a static data-flow solution that places certain processing steps on each core in the tiled CPU array, based on adjacency and message-passing.  It is not just a general-purpose OS managing all the cores as symmetric shared-memory processors.</p></htmltext>
<tokenext>Can someone explain to me how a chip can be targetted at much higher-level tasks like these ? Technically , it is more about what it is not targeting.By removing some general purpose SMP goals , they can squeeze a lot more power for these loosely coupled , data parallel tasks into a chip .
Everything in that list is something that can be parallelized into nice small chunks with mostly private intermediate data .
For these apps , you can define pipelines and data-flow solutions which map easily onto the Tilera architecture .
From the beginning , Tilera focused on the development tools needed to design such data-flow apps.They 've been targeting these markets because they are approachable for parallelism and approachable from an engineering perspective : embedded server and appliance markets are more accepting of alternative , low cost designs since they do not have as much concern with long-term platform stability for general purpose workload .
You can do things like size a wire-speed encoding/decoding or pattern-matching workload and define a static data-flow solution that places certain processing steps on each core in the tiled CPU array , based on adjacency and message-passing .
It is not just a general purpose OS managing all the cores as symmetric shared-memory processors .</tokentext>
<sentencetext>Can someone explain to me how a chip can be targetted at much higher-level tasks like these?Technically, it is more about what it is not targeting.By removing some general purpose SMP goals, they can squeeze a lot more power for these loosely coupled, data parallel tasks into a chip.
Everything in that list is something that can be parallelized into nice small chunks with mostly private intermediate data.
For these apps, you can define pipelines and data-flow solutions which map easily onto the Tilera architecture.
From the beginning, Tilera focused on the development tools needed to design such data-flow apps.They've been targeting these markets because they are approachable for parallelism and approachable from an engineering perspective: embedded server and appliance markets are more accepting of alternative, low cost designs since they do not have as much concern with long-term platform stability for general purpose workload.
You can do things like size a wire-speed encoding/decoding or pattern-matching workload and define a static data-flow solution that places certain processing steps on each core in the tiled CPU array, based on adjacency and message-passing.
It is not just a general purpose OS managing all the cores as symmetric shared-memory processors.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870775</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870521</id>
	<title>Am I *actually*...</title>
	<author>Anonymous</author>
	<datestamp>1256555400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>...the first person to ask if this can run "Crysis?"</p></htmltext>
<tokenext>...the first person to ask if this can run " Crysis ?
"</tokentext>
<sentencetext>...the first person to ask if this can run "Crysis?
"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870535</id>
	<title>Re:This is great !</title>
	<author>Anonymous</author>
	<datestamp>1256555580000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext><p>The information in cpuinfo is only redundant like that on x86/amd64...<br>On Sparc or Alpha, you get a single block of text where one of the fields means "number of cpus", example:</p><p>cpu        : TI UltraSparc IIi (Sabre)<br>fpu        : UltraSparc IIi integrated FPU<br>prom        : OBP 3.10.25 2000/01/17 21:26<br>type        : sun4u<br>ncpus probed    : 1<br>ncpus active    : 1<br>D$ parity tl1    : 0<br>I$ parity tl1    : 0<br>Cpu0Bogo    : 880.38<br>Cpu0ClkTck    : 000000001a3a4eab<br>MMU Type    : Spitfire</p><p>number of cpus active and number of cpus probed (includes any which are inactive)... a million cpus wouldn't present a problem here.</p></htmltext>
<tokenext>The information in cpuinfo is only redundant like that on x86/amd64...On Sparc or Alpha , you get a single block of text where one of the fields means " number of cpus " , example : cpu : TI UltraSparc IIi ( Sabre ) fpu : UltraSparc IIi integrated FPUprom : OBP 3.10.25 2000/01/17 21 : 26type : sun4uncpus probed : 1ncpus active : 1D $ parity tl1 : 0I $ parity tl1 : 0Cpu0Bogo : 880.38Cpu0ClkTck : 000000001a3a4eabMMU Type : Spitfirenumber of cpus active and number of cpus probed ( includes any which are inactive ) ... a million cpus would n't present a problem here .</tokentext>
<sentencetext>The information in cpuinfo is only redundant like that on x86/amd64...On Sparc or Alpha, you get a single block of text where one of the fields means "number of cpus", example:cpu        : TI UltraSparc IIi (Sabre)fpu        : UltraSparc IIi integrated FPUprom        : OBP 3.10.25 2000/01/17 21:26type        : sun4uncpus probed    : 1ncpus active    : 1D$ parity tl1    : 0I$ parity tl1    : 0Cpu0Bogo    : 880.38Cpu0ClkTck    : 000000001a3a4eabMMU Type    : Spitfirenumber of cpus active and number of cpus probed (includes any which are inactive)... a million cpus wouldn't present a problem here.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870107</id>
	<title>When does a CPU become the CPU?</title>
	<author>LaurensVH</author>
	<datestamp>1256549760000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>5</modscore>
	<htmltext>It appears from the article that it's a new, separate architecture to which the kernel hasn't been ported yet, so these are add-on processors that can help reduce the load on the actual CPU, at least for now.

So, em, two things:

1. How exactly does that work without kernel-level support? They claim to have ported individual apps (MySQL, memcached, Apache), so this might suggest a generic kernel interface and userspace scheduling.
2. How does this help when the apps they ported are mostly IO-bound in a lot of cases, so 99% of the cores will still just be eating out of their noses?</htmltext>
<tokenext>It appears from the article that it 's a new , separate architecture to which the kernel has n't been ported yet , so these are add-on processors that can help reduce the load on the actual CPU , at least for now .
So , em , two things : 1 .
How exactly does that work without kernel level support ?
They claimed having ported separate apps ( MySQL , memcached , Apache ) , so this might suggest a generic kernel interface and userspace scheduling .
2. How does this fix the apps they ported being mostly IO bound in a lot of cases and 99 \ % of the cores will still just be eating out of their noses ?</tokentext>
<sentencetext>It appears from the article that it's a new, separate architecture to which the kernel hasn't been ported yet, so these are add-on processors that can help reduce the load on the actual CPU, at least for now.
So, em, two things:

1.
How exactly does that work without kernel level support?
They claimed having ported separate apps (MySQL, memcached, Apache), so this might suggest a generic kernel interface and userspace scheduling.
2. How does this fix the apps they ported being mostly IO bound in a lot of cases and 99% of the cores will still just be eating out of their noses?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870311</id>
	<title>Re:This is great !</title>
	<author>Trepidity</author>
	<datestamp>1256552700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>And if you change the code a little more, it takes single-threaded tasks and automatically finds an efficient parallelization of them, distributing the work out to those million cores!</p></htmltext>
<tokenext>And if you change the code a little more , it takes single-threaded tasks and automatically finds an efficient parallelization of them , distributing the work out to those million cores !</tokentext>
<sentencetext>And if you change the code a little more, it takes single-threaded tasks and automatically finds an efficient parallelization of them, distributing the work out to those million cores!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870149</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870615</id>
	<title>Re:This is great !</title>
	<author>1s44c</author>
	<datestamp>1256557320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>cat<nobr> <wbr></nobr>/proc/cpuinfo | less</p></div><p>That gets modded interesting these days? The use of a pipe?</p><p>If that's not too basic to be considered interesting then moderators have got an odd idea about what interesting actually means.</p>
	</htmltext>
<tokenext>cat /proc/cpuinfo | lessThat gets modded interesting these days ?
The use of a pipe ? If that 's not too basic to be considered interesting then moderators have got a odd idea about what interesting actually means .</tokentext>
<sentencetext>cat /proc/cpuinfo | lessThat gets modded interesting these days?
The use of a pipe?If that's not too basic to be considered interesting then moderators have got a odd idea about what interesting actually means.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872021</id>
	<title>Resource sharing?</title>
	<author>arugulatarsus</author>
	<datestamp>1256570040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Any news on how the buses will be shared? This is an issue that most CPU manufacturers shy away from. Remember FB-DDRram? I can actually imagine an arbitrator bigger than the CPU in this multi-core architecture. You need something to help it scale. <br>
To explain my point a bit better: Imagine you have 100 computers all hooked up to a 10/100 hub (not a switch) and every computer has a BitTorrent client open. Same thing with the CPU and most modern buses. Your potential lag time to the bus is 99 other CPUs doing their shtick.
<br>
In TFA they mention blocks sharing switch points. Does that mean people will be encouraged to set affinities for data locality? Consider me to be an old fart, but I really would like some real world junk thrown at this or disclosure on the design.</htmltext>
<tokenext>Any news on how the busses will be shared ?
This is an issue that most CPU manufacturers will look away from .
Remember FB-DDRram ?
I can actually imagine an arbitrator bigger than the CPU in this multi-core architecture .
You need something to help it scale .
To explain my point a bit better : Imaging you have 100 computer all hooked up to a 10 / 100 hub ( not switch ) and every computer has a bit torrent client opened .
Same thing with the CPU and most modern buses .
Your potential lag time to the bus is 99 other CPUs doing their shtick .
In TFA they mention blocks sharing switch points .
Does that mean people will be encouraged to set affinities for data locality ?
Consider me to be an old fart , but I really would like some real world junk thrown at this or disclosure on the design .</tokentext>
<sentencetext>Any news on how the busses will be shared?
This is an issue that most CPU manufacturers will look away from.
Remember FB-DDRram?
I can actually imagine an arbitrator bigger than the CPU in this multi-core architecture.
You need something to help it scale.
To explain my point a bit better: Imagine you have 100 computers all hooked up to a 10/100 hub (not a switch) and every computer has a BitTorrent client open.
Same thing with the CPU and most modern buses.
Your potential lag time to the bus is 99 other CPUs doing their shtick.
In TFA they mention blocks sharing switch points.
Does that mean people will be encouraged to set affinities for data locality?
Consider me to be an old fart, but I really would like some real world junk thrown at this or disclosure on the design.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870447</id>
	<title>Re:Custom ISA?</title>
	<author>ForeverFaithless</author>
	<datestamp>1256554260000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext>Wikipedia <a href="http://en.wikipedia.org/wiki/TILE64" title="wikipedia.org" rel="nofollow">claims</a> [wikipedia.org] it's a MIPS-derived VLIW instruction set.</htmltext>
<tokenext>Wikipedia claims [ wikipedia.org ] it 's a MIPS-derived VLIW instruction set .</tokentext>
<sentencetext>Wikipedia claims [wikipedia.org] it's a MIPS-derived VLIW instruction set.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870265</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870265</id>
	<title>Re:Custom ISA?</title>
	<author>stiggle</author>
	<datestamp>1256551920000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext><p>From a quick Google - it's based on the ARM core (easily licensable CPU core)</p></htmltext>
<tokenext>From a quick Google - its based on the ARM core ( easily licensable cpu core )</tokentext>
<sentencetext>From a quick Google - it's based on the ARM core (easily licensable CPU core)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870111</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870451</id>
	<title>Re:What happened to powers of 2?</title>
	<author>Anonymous</author>
	<datestamp>1256554320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>I assume not, but it's a silly response.  Personally, I find 128 cores strange, since you can't lay them out on a square die like you can 100 (10x10) or 64 (8x8).</htmltext>
<tokenext>I assume not , but it 's a silly response .
Personally , I find 128-cores strange , since you ca n't lay them out on a square die like you can 100 ( 10x10 ) or 64 ( 8x8 ) .</tokentext>
<sentencetext>I assume not, but it's a silly response.
Personally, I find 128-cores strange, since you can't lay them out on a square die like you can 100 (10x10) or 64 (8x8).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870387</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870241</id>
	<title>55 Watts!</title>
	<author>conureman</author>
	<datestamp>1256551680000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>I guess I gotta RTFA; Man it's past my bedtime.</p></htmltext>
<tokenext>I guess I got ta RTFA ; Man it 's past my bedtime .</tokentext>
<sentencetext>I guess I gotta RTFA; Man it's past my bedtime.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29878131</id>
	<title>Re:Been there, done that, got the T-Shirt...</title>
	<author>BikeHelmet</author>
	<datestamp>1256554560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Tip: Don't sign your name when posting anonymously.<nobr> <wbr></nobr>:P</p></htmltext>
<tokenext>Tip : Do n't sign your name when posting anonymously .
: P</tokentext>
<sentencetext>Tip: Don't sign your name when posting anonymously.
:P</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870661</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870763</id>
	<title>It would be clever</title>
	<author>Anonymous</author>
	<datestamp>1256559720000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Since a) developing a processor is insanely expensive and b) they need it to run lots of software ASAP, it would be very clever if they spent a marginal part of the overall development costs in making sure every key Linux and *BSD kernel developer gets some hardware they can use to port the stuff over. Make it a nice desktop workstation with cool graphics and it will happen even faster.</p><p>They are going up against Intel... The traditional approach (delivering a faster processor with better power consumption at a lower price) simply will not work here.</p><p>I think Movidis taught us a lesson a couple years back. Users will not move away from x86 for anything less than a spectacular improvement. Even the Niagara SPARC servers are a hard sell these days...</p></htmltext>
<tokenext>Since a ) developing a processor is insanely expensive and b ) they need it to run lots of software ASAP , it would be very clever if they spent a marginal part of the overall development costs in making sure every key Linux and * BSD kernel developer gets some hardware they can use to port the stuff over .
Make it a nice desktop workstation with cool graphics and it will happen even faster.They are going up against Intel... The traditional approach ( delivering a faster processor with a better power consumption at a lower price ) simply will not work here.I think Movidis taught us a lesson a couple years back .
Users will not move away from x86 for anything less than a spectacular improvement .
Even the Niagara SPARC servers are a hard sell these days.. .</tokentext>
<sentencetext>Since a) developing a processor is insanely expensive and b) they need it to run lots of software ASAP, it would be very clever if they spent a marginal part of the overall development costs in making sure every key Linux and *BSD kernel developer gets some hardware they can use to port the stuff over.
Make it a nice desktop workstation with cool graphics and it will happen even faster.They are going up against Intel... The traditional approach (delivering a faster processor with a better power consumption at a lower price) simply will not work here.I think Movidis taught us a lesson a couple years back.
Users will not move away from x86 for anything less than a spectacular improvement.
Even the Niagara SPARC servers are a hard sell these days...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871009</id>
	<title>Re:100?</title>
	<author>TheRaven64</author>
	<datestamp>1256563080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It doesn't need to be a power of two, but being a square number helps for this kind of design because you want a regular arrangement that can fit into a regular grid on the die.</htmltext>
<tokenext>It does n't need to be a power of two , but being a square number helps for this kind of design because you want a regular arrangement that can fit into a regular grid on the die .</tokentext>
<sentencetext>It doesn't need to be a power of two, but being a square number helps for this kind of design because you want a regular arrangement that can fit into a regular grid on the die.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870247</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870255</id>
	<title>crossbars</title>
	<author>Anonymous</author>
	<datestamp>1256551800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>In the article it is mentioned that Tilera is able to avoid the use of crossbars:</p><p>For faster data exchange, Tilera has organized parallelized cores in a square with multiple points to receive and transfer data. Each core has a switch for faster data exchange. Chips from Intel and AMD rely on crossbars, but as the number of cores expands, the design could potentially cause a gridlock that could lead to bandwidth issues, he said.</p><p>Does anybody here know how this actually works?</p></htmltext>
<tokenext>in the article it is mentioned that Tilera is able to avoid the use of crossbars : For faster data exchange , Tilera has organized parallelized cores in a square with multiple points to receive and transfer data .
Each core has a switch for faster data exchange .
Chips from Intel and AMD rely on crossbars , but as the number of cores expands , the design could potentially cause a gridlock that could lead to bandwidth issues , he said.Does anybody here know how this actually works ?</tokentext>
<sentencetext>in the article it is mentioned that Tilera is able to avoid the use of crossbars:For faster data exchange, Tilera has organized parallelized cores in a square with multiple points to receive and transfer data.
Each core has a switch for faster data exchange.
Chips from Intel and AMD rely on crossbars, but as the number of cores expands, the design could potentially cause a gridlock that could lead to bandwidth issues, he said.Does anybody here know how this actually works?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871239</id>
	<title>Re:This is great !</title>
	<author>tomhath</author>
	<datestamp>1256565180000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>Whoa. If you change the source a little, you can enter 1000000 into the Maximum number of CPUs field! Linux is ready for up to a million cores.</p></div><p>640K cores is more than anyone will ever need.</p>
	</htmltext>
<tokenext>Whoa .
If you change the source a little , you can enter 1000000 into the Maximum number of CPUs field !
Linux is ready for up to a million cores.640K cores is more than anyone will ever need .</tokentext>
<sentencetext>Whoa.
If you change the source a little, you can enter 1000000 into the Maximum number of CPUs field!
Linux is ready for up to a million cores.640K cores is more than anyone will ever need.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870149</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870553</id>
	<title>Re:Custom ISA?</title>
	<author>rbanffy</author>
	<datestamp>1256556060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>They have a C compiler. That's all we need.</p></htmltext>
<tokenext>They have a C compiler .
That 's all we need .</tokentext>
<sentencetext>They have a C compiler.
That's all we need.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870111</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871925
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870979
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870615
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29884861
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871191
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870097
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872953
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29875195
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871669
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29873853
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870107
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29878157
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870097
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870445
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29878131
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870661
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870573
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870247
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29874303
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870107
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870325
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870111
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29882437
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872491
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871581
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870895
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870297
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870111
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872995
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870265
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870111
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870289
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870247
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871121
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870775
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871085
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870107
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870653
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870387
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871093
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870451
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870387
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870299
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870111
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870447
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870265
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870111
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870535
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870939
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870387
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870823
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870107
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870553
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870111
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870327
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870247
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29873941
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870775
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29873819
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871661
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870629
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870149
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870285
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870247
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871503
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871009
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870247
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871239
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870149
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870711
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870793
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870521
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870765
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870265
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870111
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871031
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29876641
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870247
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29878187
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872491
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870879
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870265
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870111
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870135
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29887063
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871059
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870311
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870149
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29877203
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870447
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870265
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870111
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_26_0711218_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872553
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870585
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_26_0711218.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870521
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870793
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_26_0711218.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870247
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29876641
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870289
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870327
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870573
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870285
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871009
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_26_0711218.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871661
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29873819
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_26_0711218.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872491
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29878187
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29882437
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_26_0711218.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870775
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29873941
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871121
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_26_0711218.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870063
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870615
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870979
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871925
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870711
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29884861
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870149
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870629
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870311
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871059
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871239
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871031
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870445
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871503
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870135
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870585
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872553
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870535
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29887063
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872953
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_26_0711218.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870641
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_26_0711218.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870973
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_26_0711218.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870107
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871085
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29873853
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870823
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29874303
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_26_0711218.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871669
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29875195
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_26_0711218.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870763
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_26_0711218.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870255
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_26_0711218.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870345
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_26_0711218.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870387
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870939
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870653
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870451
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871093
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_26_0711218.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870097
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29878157
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871191
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_26_0711218.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870661
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29878131
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_26_0711218.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870111
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870297
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870895
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29871581
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870265
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870879
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872995
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870765
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870447
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29872069
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29877203
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870299
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870325
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_26_0711218.29870553
</commentlist>
</conversation>
