<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_11_16_1849259</id>
	<title>100 Million-Core Supercomputers Coming By 2018</title>
	<author>timothy</author>
	<datestamp>1258397760000</datestamp>
	<htmltext>CWmike writes <i>"As amazing as today's supercomputing systems are, they remain primitive and current designs soak up too much power, space and money. And as big as they are today, supercomputers aren't big enough &mdash; a key topic for some of the estimated 11,000 people now gathering in Portland, Ore. for the 22nd annual supercomputing conference, SC09, will be <a href="http://www.computerworld.com/s/article/9140928/Supercomputers_with_100_million_cores_coming_by_2018">the next performance goal: an exascale system</a>. Today, supercomputers are <a href="http://www.computerworld.com/s/article/print/324410/Supercomputer_race_It_s_a_tricky_task_to_boost_and_measure_system_speed?taxonomyName=Hardware&amp;taxonomyId=12">well short of an exascale</a>. The world's fastest system at Oak Ridge National Laboratory, according to the just released Top500 list, is a Cray XT5 system, which has 224,256 processing cores from six-core Opteron chips made by Advanced Micro Devices Inc. (AMD). The Jaguar is capable of a peak performance of 2.3 petaflops. But Jaguar's record is just a blip, a fleeting benchmark. The US Department of Energy has already begun holding workshops on building a system that's 1,000 times more powerful &mdash; an exascale system, said Buddy Bland, project director at the Oak Ridge Leadership Computing Facility that includes Jaguar. The exascale systems will be needed for high-resolution climate models, bio energy products and smart grid development as well as fusion energy design. The <a href="http://www.iter.org/default.aspx">latter project is now under way in France</a>: the International Thermonuclear Experimental Reactor, which the US is co-developing. They're expected to arrive in 2018 &mdash; in line with Moore's Law &mdash; which helps to explain the roughly 10-year development period. But the problems involved in reaching exaflop scale go well beyond Moore's Law."</i></htmltext>
<tokentext>CWmike writes " As amazing as today 's supercomputing systems are , they remain primitive and current designs soak up too much power , space and money .
And as big as they are today , supercomputers are n't big enough -- a key topic for some of the estimated 11,000 people now gathering in Portland , Ore. for the 22nd annual supercomputing conference , SC09 , will be the next performance goal : an exascale system .
Today , supercomputers are well short of an exascale .
The world 's fastest system at Oak Ridge National Laboratory , according to the just released Top500 list , is a Cray XT5 system , which has 224,256 processing cores from six-core Opteron chips made by Advanced Micro Devices Inc. ( AMD ) .
The Jaguar is capable of a peak performance of 2.3 petaflops .
But Jaguar 's record is just a blip , a fleeting benchmark .
The US Department of Energy has already begun holding workshops on building a system that 's 1,000 times more powerful -- an exascale system , said Buddy Bland , project director at the Oak Ridge Leadership Computing Facility that includes Jaguar .
The exascale systems will be needed for high-resolution climate models , bio energy products and smart grid development as well as fusion energy design .
The latter project is now under way in France : the International Thermonuclear Experimental Reactor , which the US is co-developing .
They 're expected to arrive in 2018 -- in line with Moore 's Law -- which helps to explain the roughly 10-year development period .
But the problems involved in reaching exaflop scale go well beyond Moore 's Law . "</tokentext>
<sentencetext>CWmike writes "As amazing as today's supercomputing systems are, they remain primitive and current designs soak up too much power, space and money.
And as big as they are today, supercomputers aren't big enough — a key topic for some of the estimated 11,000 people now gathering in Portland, Ore. for the 22nd annual supercomputing conference, SC09, will be the next performance goal: an exascale system.
Today, supercomputers are well short of an exascale.
The world's fastest system at Oak Ridge National Laboratory, according to the just released Top500 list, is a Cray XT5 system, which has 224,256 processing cores from six-core Opteron chips made by Advanced Micro Devices Inc. (AMD).
The Jaguar is capable of a peak performance of 2.3 petaflops.
But Jaguar's record is just a blip, a fleeting benchmark.
The US Department of Energy has already begun holding workshops on building a system that's 1,000 times more powerful — an exascale system, said Buddy Bland, project director at the Oak Ridge Leadership Computing Facility that includes Jaguar.
The exascale systems will be needed for high-resolution climate models, bio energy products and smart grid development as well as fusion energy design.
The latter project is now under way in France: the International Thermonuclear Experimental Reactor, which the US is co-developing.
They're expected to arrive in 2018 — in line with Moore's Law — which helps to explain the roughly 10-year development period.
But the problems involved in reaching exaflop scale go well beyond Moore's Law."</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122802</id>
	<title>Re:100 Million?</title>
	<author>ari_j</author>
	<datestamp>1258370820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Therefore, cell phones on January 2, 2018 will have 100 million processor cores.</htmltext>
<tokentext>Therefore , cell phones on January 2 , 2018 will have 100 million processor cores .</tokentext>
<sentencetext>Therefore, cell phones on January 2, 2018 will have 100 million processor cores.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119694</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119642</id>
	<title>Portland, Ore?</title>
	<author>Anonymous</author>
	<datestamp>1258402500000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>0</modscore>
	<htmltext><p>What's "Portland, Ore"?</p><p>Oh, you mean "Portland, Ore<strong>gon</strong>"? This is a website. It isn't fucking Twitter or SMS, how hard is it to write three more letters?</p><p>We know that Slashdot is U.S.A.-centric so we'll forgive the missing "Portland, Oregon, <strong>U.S.A.</strong>" part, but for crying out loud, at least write the whole state name.</p></htmltext>
<tokentext>What 's " Portland , Ore " ?
Oh , you mean " Portland , Oregon " ?
This is a website .
It is n't fucking Twitter or SMS , how hard is it to write three more letters ?
We know that Slashdot is U.S.A.-centric so we 'll forgive the missing " Portland , Oregon , U.S.A. " part , but for crying out loud , at least write the whole state name .</tokentext>
<sentencetext>What's "Portland, Ore"?Oh, you mean "Portland, Oregon"?
This is a website.
It isn't fucking Twitter or SMS, how hard is it to write three more letters?We know that Slashdot is U.S.A.-centric so we'll forgive the missing "Portland, Oregon, U.S.A." part, but for crying out loud, at least write the whole state name.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125496</id>
	<title>Re:Who's President, Future-boy?</title>
	<author>jd</author>
	<datestamp>1258389660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>MPI is indeed clunky. Most parallel libraries out there are. (MPI and PVM are built on the idea that every node has a shell and that you can run programs by rsh-ing onto that node and running the program.)</p><p>There are more elegant ways of running things. For example, rsh and a remote shell imply that the node has a fairly good-sized OS running, which means you're eating up system resources before you do anything.</p><p>MOSIX seems a better starting point. If you can shunt already-active processes over a network, you don't need a shell, you don't need userspace stuff that's going to be stagnant most of the time. (The active process needn't be a full program, it can be something that pulls in a dynamically-loaded object that has the real program in it when it gets to the destination. In fact, it's better if it ISN'T the program, since then you don't run into the issue of whether kernel threads should migrate or not.)</p><p>For collective operations, you really want to use multicasting (as then you don't waste bandwidth or create excessive latency). Scalable Reliable Multicasting of some sort - NACK-Oriented Reliable Multicasting being the popular form - would be a good starting point. The destination nodes can calculate any node-specific details faster than a master node can, since the master node would have to do so sequentially using no information not present in the collective message.</p><p>Message passing itself is so-so as an approach. You can see from the popularity of the original MACH-based HURD that there are definite limitations to the approach. In the end, whether you're talking about remote I/O, remote procedure calls or other remote operation, you're ultimately talking about sending and receiving data, where that data can be delivered to an active thread, trigger the start of a new thread, change the state of a mutex/futex/semaphore or perform some other very basic task.</p><p>Now, different approaches to parallelism work on different assumptions. MPI assumes you've a well-defined master node and well-defined slaves in almost the reverse of the traditional client-server model. Pi-Occam assumes you've well-defined channels, where channels can be fixed or mobile. I =think= channels are all point-to-point, as Occam was originally developed with the Transputer's mesh topology in mind and still carries some of the assumptions.</p><p>There are "metaschedulers", which seek to schedule operations over a whole cluster as though it were a single virtual machine. It's a good concept; there is no real distinction between N physical machines being one logical server, or one physical server being N logical machines.</p><p>However, "metaschedulers" tend to be run on a master node. If we're running on a MOSIX-type cluster, there isn't any need for a master node. Indeed, there isn't a vast need for a single metascheduler. Rather, you'd want many mini metaschedulers, each concerned with their local area, which interacted with the local scheduler and each other.</p></htmltext>
<tokentext>MPI is indeed clunky .
Most parallel libraries out there are .
( MPI and PVM are built on the idea that every node has a shell and that you can run programs by rsh-ing onto that node and running the program . )
There are more elegant ways of running things .
For example , rsh and a remote shell imply that the node has a fairly good-sized OS running , which means you 're eating up system resources before you do anything .
MOSIX seems a better starting point .
If you can shunt already-active processes over a network , you do n't need a shell , you do n't need userspace stuff that 's going to be stagnant most of the time .
( The active process need n't be a full program , it can be something that pulls in a dynamically-loaded object that has the real program in it when it gets to the destination .
In fact , it 's better if it IS N'T the program , since then you do n't run into the issue of whether kernel threads should migrate or not . )
For collective operations , you really want to use multicasting ( as then you do n't waste bandwidth or create excessive latency ) .
Scalable Reliable Multicasting of some sort - NACK-Oriented Reliable Multicasting being the popular form - would be a good starting point .
The destination nodes can calculate any node-specific details faster than a master node can , since the master node would have to do so sequentially using no information not present in the collective message .
Message passing itself is so-so as an approach .
You can see from the popularity of the original MACH-based HURD that there are definite limitations to the approach .
In the end , whether you 're talking about remote I/O , remote procedure calls or other remote operation , you 're ultimately talking about sending and receiving data , where that data can be delivered to an active thread , trigger the start of a new thread , change the state of a mutex/futex/semaphore or perform some other very basic task .
Now , different approaches to parallelism work on different assumptions .
MPI assumes you 've a well-defined master node and well-defined slaves in almost the reverse of the traditional client-server model .
Pi-Occam assumes you 've well-defined channels , where channels can be fixed or mobile .
I = think = channels are all point-to-point , as Occam was originally developed with the Transputer 's mesh topology in mind and still carries some of the assumptions .
There are " metaschedulers " , which seek to schedule operations over a whole cluster as though it were a single virtual machine .
It 's a good concept ; there is no real distinction between N physical machines being one logical server , or one physical server being N logical machines .
However , " metaschedulers " tend to be run on a master node .
If we 're running on a MOSIX-type cluster , there is n't any need for a master node .
Indeed , there is n't a vast need for a single metascheduler .
Rather , you 'd want many mini metaschedulers , each concerned with their local area , which interacted with the local scheduler and each other .</tokentext>
<sentencetext>MPI is indeed clunky.
Most parallel libraries out there are.
(MPI and PVM are built on the idea that every node has a shell and that you can run programs by rsh-ing onto that node and running the program.)
There are more elegant ways of running things.
For example, rsh and a remote shell imply that the node has a fairly good-sized OS running, which means you're eating up system resources before you do anything.
MOSIX seems a better starting point.
If you can shunt already-active processes over a network, you don't need a shell, you don't need userspace stuff that's going to be stagnant most of the time.
(The active process needn't be a full program, it can be something that pulls in a dynamically-loaded object that has the real program in it when it gets to the destination.
In fact, it's better if it ISN'T the program, since then you don't run into the issue of whether kernel threads should migrate or not.)
For collective operations, you really want to use multicasting (as then you don't waste bandwidth or create excessive latency).
Scalable Reliable Multicasting of some sort - NACK-Oriented Reliable Multicasting being the popular form - would be a good starting point.
The destination nodes can calculate any node-specific details faster than a master node can, since the master node would have to do so sequentially using no information not present in the collective message.
Message passing itself is so-so as an approach.
You can see from the popularity of the original MACH-based HURD that there are definite limitations to the approach.
In the end, whether you're talking about remote I/O, remote procedure calls or other remote operation, you're ultimately talking about sending and receiving data, where that data can be delivered to an active thread, trigger the start of a new thread, change the state of a mutex/futex/semaphore or perform some other very basic task.
Now, different approaches to parallelism work on different assumptions.
MPI assumes you've a well-defined master node and well-defined slaves in almost the reverse of the traditional client-server model.
Pi-Occam assumes you've well-defined channels, where channels can be fixed or mobile.
I =think= channels are all point-to-point, as Occam was originally developed with the Transputer's mesh topology in mind and still carries some of the assumptions.
There are "metaschedulers", which seek to schedule operations over a whole cluster as though it were a single virtual machine.
It's a good concept; there is no real distinction between N physical machines being one logical server, or one physical server being N logical machines.
However, "metaschedulers" tend to be run on a master node.
If we're running on a MOSIX-type cluster, there isn't any need for a master node.
Indeed, there isn't a vast need for a single metascheduler.
Rather, you'd want many mini metaschedulers, each concerned with their local area, which interacted with the local scheduler and each other.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120076</parent>
</comment>
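The collective-operations idea above is easier to see in code. Below is a minimal sketch of the receive side of a NACK-oriented reliable multicast in C; the group address, port, and packet layout are illustrative assumptions only, and a real protocol such as NORM is considerably more involved.

```c
/* Minimal sketch of a NACK-oriented reliable multicast receiver, in the
 * spirit of the approach described above. The group address, port, and
 * packet layout are assumptions for illustration; error handling is
 * trimmed for brevity. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

struct pkt { uint32_t seq; char payload[1024]; };

int main(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);                        /* assumed port */
    bind(sock, (struct sockaddr *)&addr, sizeof addr);

    /* Join the multicast group: the sender transmits once and the
     * network fans the data out, so bandwidth does not grow with the
     * number of receivers. */
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("239.0.0.1"); /* assumed group */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq);

    uint32_t expected = 0;
    struct pkt p;
    while (recv(sock, &p, sizeof p, 0) > 0) {
        if (ntohl(p.seq) != expected) {
            /* Gap detected: a real implementation would unicast a NACK
             * back to the sender here. Receivers stay silent when all is
             * well, which is what lets the scheme scale. */
            fprintf(stderr, "lost packet %u, would send NACK\n", expected);
        }
        expected = ntohl(p.seq) + 1;
    }
    close(sock);
    return 0;
}
```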
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121352</id>
	<title>Too little too late!</title>
	<author>syousef</author>
	<datestamp>1258365540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I take it THIS is a machine that might run Vista well. Too late, SP3 aka Windows 7 is out.</p></htmltext>
<tokentext>I take it THIS is a machine that might run Vista well .
Too late , SP3 aka Windows 7 is out .</tokentext>
<sentencetext>I take it THIS is a machine that might run Vista well.
Too late, SP3 aka Windows 7 is out.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120484</id>
	<title>stupid</title>
	<author>Anonymous</author>
	<datestamp>1258405140000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"But the problems involved in reaching exaflop scale go well beyond Moore's Law."</p><p>
&nbsp; The above quote shows quite well that the writer doesn't understand what Moore's Law is about.</p></div>
	</htmltext>
<tokenext>" But the problems involved in reaching exaflop scale go well beyond Moore 's Law .
"   The above quote shows quite well that the writer does n't understand what Moore 's Law is about .</tokentext>
<sentencetext>"But the problems involved in reaching exaflop scale go well beyond Moore's Law.
"
  The above quote shows quite well that the writer doesn't understand what Moore's Law is about.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121552</id>
	<title>Re:Speaking of heat</title>
	<author>DarthVain</author>
	<datestamp>1258366500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Ah Heisenberg you crafty devil!</p></htmltext>
<tokentext>Ah Heisenberg you crafty devil !</tokentext>
<sentencetext>Ah Heisenberg you crafty devil!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119960</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120648</id>
	<title>Re:AMD vs Intel</title>
	<author>Anonymous</author>
	<datestamp>1258362600000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>As much as I think AMD are a bunch of crybabies, they made a wise decision early on with Hypertransport and with the IMC.  They had a solution that worked well from the desktop on up to HPC, while Intel mainly targeted the desktop up through 2P systems.  They certainly had solutions for 4P and above, but none were as elegant as the Opterons were for that space.</p><p>Note that with Nehalem (and the EX version) this will change and Opteron is no longer compelling in, well, any space other than on a price/performance basis possibly.</p></htmltext>
<tokentext>As much as I think AMD are a bunch of crybabies , they made a wise decision early on with Hypertransport and with the IMC .
They had a solution that worked well from the desktop on up to HPC , while Intel mainly targeted the desktop up through 2P systems .
They certainly had solutions for 4P and above , but none were as elegant as the Opterons were for that space .
Note that with Nehalem ( and the EX version ) this will change and Opteron is no longer compelling in , well , any space other than on a price/performance basis possibly .</tokentext>
<sentencetext>As much as I think AMD are a bunch of crybabies, they made a wise decision early on with Hypertransport and with the IMC.
They had a solution that worked well from the desktop on up to HPC, while Intel mainly targeted the desktop up through 2P systems.
They certainly had solutions for 4P and above, but none were as elegant as the Opterons were for that space.
Note that with Nehalem (and the EX version) this will change and Opteron is no longer compelling in, well, any space other than on a price/performance basis possibly.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119704</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30164830</id>
	<title>Wait...</title>
	<author>DanielSmedegaardBuus</author>
	<datestamp>1258629720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That settles it. I'm NOT purchasing any new hardware until 2018.</p></htmltext>
<tokentext>That settles it .
I 'm NOT purchasing any new hardware until 2018 .</tokentext>
<sentencetext>That settles it.
I'm NOT purchasing any new hardware until 2018.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120624</id>
	<title>Ok, Ok, we'll be more specific...</title>
	<author>raftpeople</author>
	<datestamp>1258362540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>"Portland, Oregon, U.S.A., Earth, Milky Way, Cluster TXH-170718, Universe 01 (we think)"</htmltext>
<tokenext>" Portland , Oregon , U.S.A. , Earth , Milky Way , Cluster TXH-170718 , Universe 01 ( we think ) "</tokentext>
<sentencetext>"Portland, Oregon, U.S.A., Earth, Milky Way, Cluster TXH-170718, Universe 01 (we think)"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119642</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119862</id>
	<title>So far to go!</title>
	<author>Anonymous</author>
	<datestamp>1258403100000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The fastest system only has 224k Cores?  Oh, dear.  We definitely need bigger systems, then.<br>And I suppose Deep Thought has nothing to worry about yet, either.  Yet. :D<br>(The fictional version, that is.  The "real" one has already been outdone by Rybka.)</p></htmltext>
<tokentext>The fastest system only has 224k Cores ?
Oh , dear .
We definitely need bigger systems , then .
And I suppose Deep Thought has nothing to worry about yet , either .
Yet . : D
( The fictional version , that is .
The " real " one has already been outdone by Rybka . )</tokentext>
<sentencetext>The fastest system only has 224k Cores?
Oh, dear.
We definitely need bigger systems, then.
And I suppose Deep Thought has nothing to worry about yet, either.
Yet. :D
(The fictional version, that is.
The "real" one has already been outdone by Rybka.)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124498</id>
	<title>Wow, if each core could...</title>
	<author>caywen</author>
	<datestamp>1258380060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If each core could model one small patch of skin on one of those busty 3D models, then, well, ... wow!</p></htmltext>
<tokentext>If each core could model one small patch of skin on one of those busty 3D models , then , well , ... wow !</tokentext>
<sentencetext>If each core could model one small patch of skin on one of those busty 3D models, then, well, ... wow!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122872</id>
	<title>Re:human brain</title>
	<author>ari_j</author>
	<datestamp>1258371120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Which human brain?  I can simulate the rational thought processes of 90% of humans with one vacuum tube.</htmltext>
<tokentext>Which human brain ?
I can simulate the rational thought processes of 90 % of humans with one vacuum tube .</tokentext>
<sentencetext>Which human brain?
I can simulate the rational thought processes of 90% of humans with one vacuum tube.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123654</id>
	<title>Re:Who's President, Future-boy?</title>
	<author>Anonymatt</author>
	<datestamp>1258374780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Dude, I liked reading your journal entry.</p></htmltext>
<tokentext>Dude , I liked reading your journal entry .</tokentext>
<sentencetext>Dude, I liked reading your journal entry.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119870</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120106</id>
	<title>Re:How many problems can these systems really solv</title>
	<author>Anonymous</author>
	<datestamp>1258403940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>These computers will be good for solving problems involving lots of independent operations. A processor can process one operation at a time, but since these operations do not depend on each other, the operations can be sent to several processors at once. Imagine a big foreach loop, for example.</p><p>Or maybe someone will want to have 100 million Google Chrome tabs open.</p><p>I'm personally imagining putting dnetc on one of these things.</p></htmltext>
<tokentext>These computers will be good for solving problems involving lots of independent operations .
A processor can process one operation at a time , but since these operations do not depend on each other , the operations can be sent to several processors at once .
Imagine a big foreach loop , for example .
Or maybe someone will want to have 100 million Google Chrome tabs open .
I 'm personally imagining putting dnetc on one of these things .</tokentext>
<sentencetext>These computers will be good for solving problems involving lots of independent operations.
A processor can process one operation at a time, but since these operations do not depend on each other, the operations can be sent to several processors at once.
Imagine a big foreach loop, for example.
Or maybe someone will want to have 100 million Google Chrome tabs open.
I'm personally imagining putting dnetc on one of these things.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119624</parent>
</comment>
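The "big foreach loop" the comment describes can be made concrete in a few lines of C. OpenMP is used here purely for illustration (an assumption; a real exascale machine would use a different distribution layer), but the key property is the same: no iteration reads another's result, so the range can be split across every available core.

```c
/* Sketch of an embarrassingly parallel "foreach": each iteration depends
 * only on its own index, so the runtime may hand chunks of the range to
 * as many cores as exist. Build with: gcc -fopenmp -lm */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const long n = 10000000;                 /* arbitrary problem size */
    double *results = malloc(n * sizeof *results);

    #pragma omp parallel for                 /* no communication needed */
    for (long i = 0; i < n; i++)
        results[i] = sin((double)i) * cos((double)i);

    printf("results[42] = %f\n", results[42]);
    free(results);
    return 0;
}
```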
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119928</id>
	<title>That's a lot</title>
	<author>Anonymous</author>
	<datestamp>1258403280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Wow, not just one million-core supercomputer, but 100 of them?</p></htmltext>
<tokentext>Wow , not just one million-core supercomputer , but 100 of them ?</tokentext>
<sentencetext>Wow, not just one million-core supercomputer, but 100 of them?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121346</id>
	<title>100 million cores</title>
	<author>joetomato</author>
	<datestamp>1258365480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>To which Oak Ridge National Laboratory replied "Fuck everything, we're doing 500 million cores."</htmltext>
<tokentext>To which Oak Ridge National Laboratory replied " Fuck everything , we 're doing 500 million cores . "</tokentext>
<sentencetext>To which Oak Ridge National Laboratory replied "Fuck everything, we're doing 500 million cores."</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121144</id>
	<title>100 million-core supercomputers?</title>
	<author>Tetsujin</author>
	<datestamp>1258364760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Come on, that's just silly.  I can understand why we might need a few million-core supercomputers, but who would need 100 of them?</p></htmltext>
<tokentext>Come on , that 's just silly .
I can understand why we might need a few million-core supercomputers , but who would need 100 of them ?</tokentext>
<sentencetext>Come on, that's just silly.
I can understand why we might need a few million-core supercomputers, but who would need 100 of them?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119638</id>
	<title>Why 100 million processors?</title>
	<author>140Mandak262Jamuna</author>
	<datestamp>1258402500000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext>Technically, shouldn't 640K processors be enough for every one?</htmltext>
<tokentext>Technically , should n't 640K processors be enough for every one ?</tokentext>
<sentencetext>Technically, shouldn't 640K processors be enough for every one?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119918</id>
	<title>I can visualize a third life</title>
	<author>ub3r n3u7r4l1st</author>
	<datestamp>1258403280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yup. 100-million core driven Second-Life server, like the Matrix.</p></htmltext>
<tokentext>Yup .
100-million core driven Second-Life server , like the Matrix .</tokentext>
<sentencetext>Yup.
100-million core driven Second-Life server, like the Matrix.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125690</id>
	<title>Re:Speaking of heat</title>
	<author>dontmakemethink</author>
	<datestamp>1258391760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I wouldn't even power up a 100M core box that couldn't figure out what to do with its own heat.</p><p>"Sort that heat out or I'll send you into a perpetual loop re-transcoding 'Iceland's Got Talent' re-runs.  You hear me?!"</p></htmltext>
<tokentext>I would n't even power up a 100M core box that could n't figure out what to do with its own heat .
" Sort that heat out or I 'll send you into a perpetual loop re-transcoding 'Iceland 's Got Talent ' re-runs .
You hear me ? !
"</tokentext>
<sentencetext>I wouldn't even power up a 100M core box that couldn't figure out what to do with its own heat.
"Sort that heat out or I'll send you into a perpetual loop re-transcoding 'Iceland's Got Talent' re-runs.
You hear me?!
"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119960</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119614</id>
	<title>Limits on simulation.</title>
	<author>140Mandak262Jamuna</author>
	<datestamp>1258402380000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext>The programming techniques and mathematical formulations needed to take advantage of such a very large number of processors continue to be the main stumbling blocks. Some kinds of simulations parallelize naturally. Time-accurate fluid flow simulation, for example, is very easy to parallelize, and technically you can devote a processor to each element and do time marching nicely. But not all physics problems are amenable to parallelization. Further, even in the nice cases like fluid flow, if one tries to do solution-adaptive meshing, non-uniform grids, etc., the time step slows down so much that the simulation takes too long even on a 100 million processor machine. <p>

The CFL condition that limits the maximum time step one can take shows no sign of relenting. The score has been Courant (the C in CFL) 1, Moore 0 for the last three decades.</p></htmltext>
<tokentext>The programming techniques and mathematical formulations needed to take advantage of such a very large number of processors continue to be the main stumbling blocks .
Some kinds of simulations parallelize naturally .
Time-accurate fluid flow simulation , for example , is very easy to parallelize , and technically you can devote a processor to each element and do time marching nicely .
But not all physics problems are amenable to parallelization .
Further , even in the nice cases like fluid flow , if one tries to do solution-adaptive meshing , non-uniform grids , etc. , the time step slows down so much that the simulation takes too long even on a 100 million processor machine .
The CFL condition that limits the maximum time step one can take shows no sign of relenting .
The score has been Courant ( the C in CFL ) 1 , Moore 0 for the last three decades .</tokentext>
<sentencetext>The programming techniques and mathematical formulations needed to take advantage of such a very large number of processors continue to be the main stumbling blocks.
Some kinds of simulations parallelize naturally.
Time-accurate fluid flow simulation, for example, is very easy to parallelize, and technically you can devote a processor to each element and do time marching nicely.
But not all physics problems are amenable to parallelization.
Further, even in the nice cases like fluid flow, if one tries to do solution-adaptive meshing, non-uniform grids, etc., the time step slows down so much that the simulation takes too long even on a 100 million processor machine.
The CFL condition that limits the maximum time step one can take shows no sign of relenting.
The score has been Courant (the C in CFL) 1, Moore 0 for the last three decades.</sentencetext>
</comment>
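For readers who have not met it, the CFL condition referred to above has a simple textbook form for one-dimensional advection (a standard statement, not specific to any code in this thread):

```latex
% Explicit time marching stays stable only while information travels
% less than about one cell per step: with flow speed u, cell size
% \Delta x, and Courant number C,
\[
  \Delta t \;\le\; C \, \frac{\Delta x}{|u|}, \qquad C \le C_{\max} \approx 1 .
\]
% Halving \Delta x for accuracy halves the admissible \Delta t as well,
% and the steps must still be taken one after another, which is why
% extra cores alone cannot buy back the lost wall-clock time.
```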
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120076</id>
	<title>Re:Who's President, Future-boy?</title>
	<author>Anonymous</author>
	<datestamp>1258403880000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p><div class="quote"><p>Wait, what?  You lost me.  Are you from the future?  How can you describe the state of the art as "primitive"?</p></div><p>
Pretty easily, actually.  There are lots of problems to solve, not the least of which is programming model.  We're still basically using MPI to drive these machines.  That will not cut it on a 100-million core machine where each socket has on the order of 100 cores.  MPI can very easily be described as "primitive," as well as "clunky," "tedious" and "a pain in the ***."
</p><p>
How do we checkpoint a million-core program?  How do we debug a million-core program?  We are in the infancy of computing.
</p></div>
	</htmltext>
<tokentext>Wait , what ?
You lost me .
Are you from the future ?
How can you describe the state of the art as " primitive " ?
Pretty easily , actually .
There are lots of problems to solve , not the least of which is the programming model .
We 're still basically using MPI to drive these machines .
That will not cut it on a 100-million-core machine where each socket has on the order of 100 cores .
MPI can very easily be described as " primitive , " as well as " clunky , " " tedious " and " a pain in the * * * . "
How do we checkpoint a million-core program ?
How do we debug a million-core program ?
We are in the infancy of computing .</tokentext>
<sentencetext>Wait, what?
You lost me.
Are you from the future?
How can you describe the state of the art as "primitive"?
Pretty easily, actually.
There are lots of problems to solve, not the least of which is the programming model.
We're still basically using MPI to drive these machines.
That will not cut it on a 100-million-core machine where each socket has on the order of 100 cores.
MPI can very easily be described as "primitive," as well as "clunky," "tedious" and "a pain in the ***."
How do we checkpoint a million-core program?
How do we debug a million-core program?
We are in the infancy of computing.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430</parent>
</comment>
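For anyone who has not written MPI, the complaint is easy to illustrate. Here is a minimal, standard MPI-1 C program (compiles with mpicc; nothing here is specific to Jaguar or any machine in the story), showing the explicit rank-by-rank choreography that would somehow have to scale to 100 million cores:

```c
/* A minimal MPI program: every rank contributes one value and all ranks
 * receive the sum. Even this simplest collective is hand-run, explicit
 * bookkeeping; checkpointing and debugging it at a million ranks is the
 * open problem the comment above describes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int local = rank, total = 0;
    MPI_Allreduce(&local, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks: %d\n", size, total);

    MPI_Finalize();
    return 0;
}
```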
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123388</id>
	<title>Re:Limits on simulation.</title>
	<author>speed of lightx2</author>
	<datestamp>1258373340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Not true. That is only true for explicit discretizations; the CFL condition doesn't apply to implicit problems. On the other hand, you do have to invert a large matrix, but there are tools for doing that with large sparse matrices.</htmltext>
<tokentext>Not true .
That is only true for explicit discretizations ; the CFL condition does n't apply to implicit problems .
On the other hand , you do have to invert a large matrix , but there are tools for doing that with large sparse matrices .</tokentext>
<sentencetext>Not true.
That is only true for explicit discretizations; the CFL condition doesn't apply to implicit problems.
On the other hand, you do have to invert a large matrix, but there are tools for doing that with large sparse matrices.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119614</parent>
</comment>
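To make the explicit/implicit distinction concrete, here is a small C sketch of one backward-Euler (implicit) step for the 1D heat equation, taken with a time step about 20x the explicit stability limit. Grid size and coefficients are arbitrary illustrative choices; the tridiagonal solve uses the Thomas algorithm, the simplest relative of the sparse-matrix tools mentioned above.

```c
/* One implicit (backward-Euler) diffusion step: unconditionally stable,
 * so no CFL limit on dt, at the price of a linear solve per step. */
#include <stdio.h>

#define N 100

int main(void) {
    double u[N];
    double dx = 1.0 / (N - 1);
    double dt = 10.0 * dx * dx;      /* far above the explicit limit 0.5*dx*dx */
    double r  = dt / (dx * dx);      /* diffusion number, here r = 10 */

    for (int i = 0; i < N; i++)      /* initial condition: spike in the middle */
        u[i] = (i == N / 2) ? 1.0 : 0.0;

    /* Backward Euler solves (I - r*L) u_new = u_old: a tridiagonal system
     * with diagonal 1+2r, off-diagonals -r, Dirichlet ends u[0]=u[N-1]=0.
     * Thomas algorithm, forward sweep: */
    double c[N], d[N];
    c[0] = 0.0; d[0] = 0.0;          /* boundary row: u[0] = 0 */
    for (int i = 1; i < N - 1; i++) {
        double m = (1.0 + 2.0 * r) + r * c[i - 1];   /* b - a*c', with a = -r */
        c[i] = -r / m;
        d[i] = (u[i] + r * d[i - 1]) / m;            /* (rhs - a*d') / m */
    }
    u[N - 1] = 0.0;                  /* boundary row: u[N-1] = 0 */
    for (int i = N - 2; i >= 1; i--) /* back substitution */
        u[i] = d[i] - c[i] * u[i + 1];
    u[0] = 0.0;

    printf("center value after one large implicit step: %f\n", u[N / 2]);
    return 0;
}
```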
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121196</id>
	<title>"now gathering in Portland, Ore"</title>
	<author>Tetsujin</author>
	<datestamp>1258364880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Can I just say...  FUCK YES.  Thank you!</p><p>As someone who grew up in the Portland (Maine) area it annoys me to no end when people talk about things in "Portland" and neglect to disambiguate - especially when they're talking about the <em>other</em> Portland. :)</p></htmltext>
<tokentext>Can I just say ... FUCK YES .
Thank you !
As someone who grew up in the Portland ( Maine ) area it annoys me to no end when people talk about things in " Portland " and neglect to disambiguate - especially when they 're talking about the other Portland .
: )</tokentext>
<sentencetext>Can I just say...  FUCK YES.
Thank you!
As someone who grew up in the Portland (Maine) area it annoys me to no end when people talk about things in "Portland" and neglect to disambiguate - especially when they're talking about the other Portland.
:)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30128260</id>
	<title>Re:Speaking of heat</title>
	<author>hesaigo999ca</author>
	<datestamp>1258470000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>&gt;Seriously, how much heat is that thing going to put out?<br>Enough to burn a hole in the ozone layer... or melt the arctic polar cap.<br>Seriously though, if we could also develop something to harness the heat this thing will give off, it would be a double whammy!</p></htmltext>
<tokentext>&gt; Seriously , how much heat is that thing going to put out ?
Enough to burn a hole in the ozone layer ... or melt the arctic polar cap .
Seriously though , if we could also develop something to harness the heat this thing will give off , it would be a double whammy !</tokentext>
<sentencetext>&gt;Seriously, how much heat is that thing going to put outEnough to burn a hole in the ozone layer....or melt the arctic polar cap.Seriously though, if we could also develop something to harness the neat this thing will give off, it would be a double whammy!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119960</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119604</id>
	<title>Re:100 Million?</title>
	<author>Anonymous</author>
	<datestamp>1258402380000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Just because the number's "effect" on you diminishes as it goes up doesn't mean it isn't still significant.  There's a reason Engineers use quantitative instead of qualitative.<br>How do you tell the difference between hot and really hot or really really hot?</p><p>Really.</p><p>How about the difference between 10, 20 and 30?</p><p>10</p><p>Which gives you more information?</p></htmltext>
<tokentext>Just because the number 's " effect " on you diminishes as it goes up does n't mean it is n't still significant .
There 's a reason Engineers use quantitative instead of qualitative .
How do you tell the difference between hot and really hot or really really hot ?
Really .
How about the difference between 10 , 20 and 30 ?
10
Which gives you more information ?</tokentext>
<sentencetext>Just because the number's "effect" on you diminishes as it goes up doesn't mean it isn't still significant.
There's a reason Engineers use quantitative instead of qualitative.
How do you tell the difference between hot and really hot or really really hot?
Really.
How about the difference between 10, 20 and 30?
10
Which gives you more information?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124572</id>
	<title>Re:human brain</title>
	<author>caywen</author>
	<datestamp>1258380600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I have this feeling that simulating the human brain ends up being practically meaningless without simulating the rest of the body, and without some meaningful, long-term interaction with either the real world or an incredibly sophisticated simulation. I'm assuming one would "grow" the brain in the simulation rather than, say, capturing the full chemical state of every neuron in a living person's brain (which sounds pretty impossible to me).</p></htmltext>
<tokentext>I have this feeling that simulating the human brain ends up being practically meaningless without simulating the rest of the body , and without some meaningful , long-term interaction with either the real world or an incredibly sophisticated simulation .
I 'm assuming one would " grow " the brain in the simulation rather than , say , capturing the full chemical state of every neuron in a living person 's brain ( which sounds pretty impossible to me ) .</tokentext>
<sentencetext>I have this feeling that simulating the human brain ends up being practically meaningless without simulating the rest of the body, and without some meaningful, long-term interaction with either the real world or an incredibly sophisticated simulation.
I'm assuming one would "grow" the brain in the simulation rather than, say, capturing the full chemical state of every neuron in a living person's brain (which sounds pretty impossible to me).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124384</id>
	<title>Re:human brain</title>
	<author>Anonymous</author>
	<datestamp>1258379160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>One: http://en.wikipedia.org/wiki/Turing_completeness</p><p>The issue is understanding of the brain far more so than computing power.</p></htmltext>
<tokentext>One : http://en.wikipedia.org/wiki/Turing_completeness
The issue is understanding of the brain far more so than computing power .</tokentext>
<sentencetext>One: http://en.wikipedia.org/wiki/Turing_completeness
The issue is understanding of the brain far more so than computing power.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120206</id>
	<title>Re:Oink, oink</title>
	<author>David Greene</author>
	<datestamp>1258404300000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p><div class="quote"><p>
Sounds like a pork program.  What are "bio energy products", anyway.  Ethanol?
</p></div><p>
I'm no expert on this, but I would guess the idea is to use the processing power to model different kinds of molecular manipulation to see what kind of energy density we can get out of manufactured biological goo.  Combustion modeling is a common problem solved by HPC systems.  Or maybe we can expore how to use bacteria created to process waste and give off energy as a byproduct.  I don't know, the possibilities are endless.
</p><p><div class="quote"><p>
It's striking how few supercomputers are sold to commercial companies.   Even the military doesn't use them much any more.</p></div><p>Define "supercomputer."  Sony uses them.  So does Boeing.  The auto industry uses clusters to model crashes, but I believe that's more limited by the design of the off-the-shelf software than anything.  They could certainly run on supercomputer-class machines if the vendors ported them.
</p><p>And the military uses them <em>a lot</em>.  Much of the DOE research done on these machines is probably defense-driven.</p></div>
	</htmltext>
<tokentext>Sounds like a pork program .
What are " bio energy products " , anyway ?
Ethanol ?
I 'm no expert on this , but I would guess the idea is to use the processing power to model different kinds of molecular manipulation to see what kind of energy density we can get out of manufactured biological goo .
Combustion modeling is a common problem solved by HPC systems .
Or maybe we can explore how to use bacteria created to process waste and give off energy as a byproduct .
I do n't know , the possibilities are endless .
It 's striking how few supercomputers are sold to commercial companies .
Even the military does n't use them much any more .
Define " supercomputer . "
Sony uses them .
So does Boeing .
The auto industry uses clusters to model crashes , but I believe that 's more limited by the design of the off-the-shelf software than anything .
They could certainly run on supercomputer-class machines if the vendors ported them .
And the military uses them a lot .
Much of the DOE research done on these machines is probably defense-driven .</tokentext>
<sentencetext>Sounds like a pork program.
What are "bio energy products", anyway?
Ethanol?
I'm no expert on this, but I would guess the idea is to use the processing power to model different kinds of molecular manipulation to see what kind of energy density we can get out of manufactured biological goo.
Combustion modeling is a common problem solved by HPC systems.
Or maybe we can explore how to use bacteria created to process waste and give off energy as a byproduct.
I don't know, the possibilities are endless.
It's striking how few supercomputers are sold to commercial companies.
Even the military doesn't use them much any more.
Define "supercomputer."
Sony uses them.
So does Boeing.
The auto industry uses clusters to model crashes, but I believe that's more limited by the design of the off-the-shelf software than anything.
They could certainly run on supercomputer-class machines if the vendors ported them.
And the military uses them a lot.
Much of the DOE research done on these machines is probably defense-driven.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119698</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120296</id>
	<title>Sir, all 1 million cores have failed..</title>
	<author>Anonymous</author>
	<datestamp>1258404540000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>if only they had built 1,000,001!</p></htmltext>
<tokentext>if only they had built 1,000,001 !</tokentext>
<sentencetext>if only they had built 1,000,001!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125640</id>
	<title>Re:100 Million?</title>
	<author>bandmassa</author>
	<datestamp>1258391220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>How about Peak Oil coming by 2015? That's going to put the brakes on the number of cores in our computers. Won't matter what we call it. When the energy runs out, there'll be no food, let alone new computers.</htmltext>
<tokentext>How about Peak Oil coming by 2015 ?
That 's going to put the brakes on the number of cores in our computers .
Wo n't matter what we call it .
When the energy runs out , there 'll be no food , let alone new computers .</tokentext>
<sentencetext>How about Peak Oil coming by 2015?
That's going to put the brakes on the number of cores in our computers.
Won't matter what we call it.
When the energy runs out, there'll be no food, let alone new computers.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123770</id>
	<title>Add as many processors as you want....</title>
	<author>Ozlanthos</author>
	<datestamp>1258375500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I am sure Stalker SOC will still crash.
<br>
<br>

-Oz</htmltext>
<tokentext>I am sure Stalker SOC will still crash .
-Oz</tokentext>
<sentencetext>I am sure Stalker SOC will still crash.
-Oz</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122880</id>
	<title>Re:Limits on simulation.</title>
	<author>Anonymous</author>
	<datestamp>1258371120000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>No, fluid flow parallelizes horribly - you have to calculate the time steps in order.</p></htmltext>
<tokentext>No , fluid flow parallelizes horribly - you have to calculate the time steps in order .</tokentext>
<sentencetext>No, fluid flow parallelizes horribly - you have to calculate the time steps in order.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119614</parent>
</comment>
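This comment and its parent are right about different axes, which is worth spelling out: the time loop is inherently sequential, but the work inside each step parallelizes across space. A toy C/OpenMP illustration of that split (1D explicit diffusion stencil; all sizes are arbitrary assumptions):

```c
/* Time marching: the outer loop over steps must run in order, while the
 * inner loop over cells is independent and can be spread across cores. */
#include <stdio.h>
#include <string.h>

#define NX 1000000   /* spatial cells: parallel within a step */
#define NT 100       /* time steps: inherently sequential     */

static double u[NX], unew[NX];

int main(void) {
    u[NX / 2] = 1.0;                       /* initial condition */
    double r = 0.25;                       /* diffusion number, inside the
                                              explicit stability limit 0.5 */
    for (int n = 0; n < NT; n++) {         /* sequential time marching */
        #pragma omp parallel for           /* spatial work split over cores */
        for (long i = 1; i < NX - 1; i++)
            unew[i] = u[i] + r * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
        memcpy(u, unew, sizeof u);         /* step n must finish before n+1 */
    }
    printf("u at center after %d steps: %g\n", NT, u[NX / 2]);
    return 0;
}
```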
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119860</id>
	<title>Windows Vista</title>
	<author>Anonymous</author>
	<datestamp>1258403100000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The question is: will it be enough to run Aero?</p></htmltext>
<tokentext>The question is : will it be enough to run Aero ?</tokentext>
<sentencetext>The question is: will it be enough to run Aero?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120182</id>
	<title>Re:100 Million?</title>
	<author>Anonymous</author>
	<datestamp>1258404240000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>Can you translate that into "Library of Congress's"?</p></htmltext>
<tokentext>Can you translate that into " Library of Congress 's " ?</tokentext>
<sentencetext>Can you translate that into "Library of Congress's"?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30126872</id>
	<title>Each core simulates a neuron....</title>
	<author>jameskojiro</author>
	<datestamp>1258449660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Make a Beowulf cluster of these and voilà, simulated brain!</p></htmltext>
<tokentext>Make a Beowulf cluster of these and voilà , simulated brain !</tokentext>
<sentencetext>Make a Beowulf cluster of these and voilà, simulated brain!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123620</id>
	<title>Re:How many problems can these systems really solv</title>
	<author>jstults</author>
	<datestamp>1258374600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>How many problems can these systems really solve?</p></div><p>Well, only the ones where you need to conserve mass, momentum and energy; pretty niche market really...</p></div>
	</htmltext>
<tokentext>How many problems can these systems really solve ?
Well , only the ones where you need to conserve mass , momentum and energy ; pretty niche market really ...</tokentext>
<sentencetext>How many problems can these systems really solve?
Well, only the ones where you need to conserve mass, momentum and energy; pretty niche market really...
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119624</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30126904</id>
	<title>Re:Speaking of heat</title>
	<author>Ultracrepidarian</author>
	<datestamp>1258450020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Actually, the known universe is just a part of a massive simulation.</htmltext>
<tokentext>Actually , the known universe is just a part of a massive simulation .</tokentext>
<sentencetext>Actually, the known universe is just a part of a massive simulation.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119960</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119496</id>
	<title>yeah but yeah but</title>
	<author>Anonymous</author>
	<datestamp>1258402080000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>my old trusty VIC20 ftw</p></htmltext>
<tokentext>my old trusty VIC20 ftw</tokentext>
<sentencetext>my old trusty VIC20 ftw</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120286</id>
	<title>Re:Who's President, Future-boy?</title>
	<author>Anonymous</author>
	<datestamp>1258404480000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>-Peter</p></div><p>Peter! Hey how's it going!  We have been looking for you to help out this poor unfortunate guy Paul over here.   Ok truth is.. there are millions of Pauls I hope you don't mind spreading your wealth around.</p></div>
	</htmltext>
<tokentext>-Peter
Peter !
Hey , how 's it going !
We have been looking for you to help out this poor unfortunate guy Paul over here .
OK , truth is ... there are millions of Pauls ; I hope you do n't mind spreading your wealth around .</tokentext>
<sentencetext>-Peter
Peter!
Hey, how's it going!
We have been looking for you to help out this poor unfortunate guy Paul over here.
OK, truth is... there are millions of Pauls; I hope you don't mind spreading your wealth around.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123288</id>
	<title>Re:100 Million?</title>
	<author>b4upoo</author>
	<datestamp>1258372980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>   But can it play chess?</p></htmltext>
<tokentext>But can it play chess ?</tokentext>
<sentencetext>   But can it play chess?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30127752</id>
	<title>Storage?</title>
	<author>mattr</author>
	<datestamp>1258464360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If you are fighting about 1000 vs. 1024 cores, you haven't got enough of them yet.</p><p>10^8 cores isn't that much.<br>Human body: 10^14 cells, 10^11 neurons, 10^14 synapses.<br>It would be enough to simulate a brain maybe if each core simulated 1000 neurons and it is interconnected as well as a brain. Basically if it's a brain.</p><p>You could simulate a brain at 1000 neurons per core but it has to be cheap enough, small enough, low enough power consumption and dissipation, well enough interconnected and - okay basically you have to have a brain.</p><p>It would be very useful in biology, though even at the recent petacomputer discussions there was a question about whether data should really be stored, it is so expensive to do so. Ideally you would put a drop of blood in and the data would be driven in real time through the system, which would<br>The problem is data storage. I was in a seminar about the petacomputer being built in Japan. The people were saying that there is a real question about whether data should be stored and how.</p></htmltext>
<tokenext>If you are fighting about 1000 vs. 1024 cores , you have n't got enough of them yet .
10 ^ 8 cores is n't that much .
Human body : 10 ^ 14 cells , 10 ^ 11 neurons , 10 ^ 14 synapses .
It would be enough to simulate a brain maybe if each core simulated 1000 neurons and it is interconnected as well as a brain .
Basically if it 's a brain .
You could simulate a brain at 1000 neurons per core but it has to be cheap enough , small enough , low enough power consumption and dissipation , well enough interconnected and - okay basically you have to have a brain .
It would be very useful in biology , though even at the recent petacomputer discussions there was a question about whether data should really be stored , it is so expensive to do so .
Ideally you would put a drop of blood in and the data would be driven in real time through the system , which would
The problem is data storage .
I was in a seminar about the petacomputer being built in Japan .
The people were saying that there is a real question about whether data should be stored and how .</tokentext>
<sentencetext>If you are fighting about 1000 vs. 1024 cores, you haven't got enough of them yet.
10^8 cores isn't that much.
Human body: 10^14 cells, 10^11 neurons, 10^14 synapses.
It would be enough to simulate a brain maybe if each core simulated 1000 neurons and it is interconnected as well as a brain.
Basically if it's a brain.
You could simulate a brain at 1000 neurons per core but it has to be cheap enough, small enough, low enough power consumption and dissipation, well enough interconnected and - okay basically you have to have a brain.
It would be very useful in biology, though even at the recent petacomputer discussions there was a question about whether data should really be stored, it is so expensive to do so.
Ideally you would put a drop of blood in and the data would be driven in real time through the system, which would
The problem is data storage.
I was in a seminar about the petacomputer being built in Japan.
The people were saying that there is a real question about whether data should be stored and how.</sentencetext>
</comment>
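The neurons-per-core arithmetic above is easy to check. A minimal sketch in Python, using the comment's own order-of-magnitude estimates (these are the comment's figures, not measured values):

```python
# The comment's own order-of-magnitude estimates, not measured values.
cores = 10**8          # the "100 million core" machine
neurons = 10**11       # rough neuron count in a human brain
synapses = 10**14      # rough synapse count in a human brain

print(f"neurons per core:  {neurons // cores:,}")    # -> 1,000
print(f"synapses per core: {synapses // cores:,}")   # -> 1,000,000
```

The interconnect is the harder half of the comparison: at these ratios each core would have to exchange state for about a million synapses per step, which is exactly the comment's "interconnected as well as a brain" caveat.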
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120776</id>
	<title>Re:human brain</title>
	<author>CannonballHead</author>
	<datestamp>1258363200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Define "simulate" in this context.  Processing power?  Creativity?  Originality?  Ingenuity?  I didn't think any number of cores could "cause" creativity... aside from a "brute force" method.  Try-every-possibility-and-see-if-one-works.</htmltext>
<tokenext>Define " simulate " in this context .
Processing power ?
Creativity ? Originality ?
Ingenuity ? I did n't think any number of cores could " cause " creativity... aside from a " brute force " method .
Try-every-possibility-and-see-if-one-works .</tokentext>
<sentencetext>Define "simulate" in this context.
Processing power?
Creativity?  Originality?
Ingenuity?  I didn't think any number of cores could "cause" creativity... aside from a "brute force" method.
Try-every-possibility-and-see-if-one-works.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122548</id>
	<title>I'd like to see. . .</title>
	<author>Fantastic Lad</author>
	<datestamp>1258369680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Department of energy?</p><p>Mapping weather systems?</p><p>Cracking high bit encryption schemes?  Listening to every phone call happening on the planet and mapping social patterns?</p><p>BORING!</p><p>No, I want to see a 100 million core supercomputer render one of those <a href="http://www.skytopia.com/project/fractal/mandelbulb.html" title="skytopia.com">3D "Mandelbulbs"</a> [skytopia.com] and let me do some real-time exploring with a VR helmet.</p><p>Now THAT would be a worthy use for such resources!</p><p>That and being able to grow virtual beings from DNA samples.</p><p>-FL</p></htmltext>
<tokenext>Department of energy ?
Mapping weather systems ?
Cracking high bit encryption schemes ?
Listening to every phone call happening on the planet and mapping social patterns ?
BORING !
No , I want to see a 100 million core supercomputer render one of those 3D " Mandelbulbs " [ skytopia.com ] and let me do some real-time exploring with a VR helmet .
Now THAT would be a worthy use for such resources !
That and being able to grow virtual beings from DNA samples .
-FL</tokentext>
<sentencetext>Department of energy?
Mapping weather systems?
Cracking high bit encryption schemes?
Listening to every phone call happening on the planet and mapping social patterns?
BORING!
No, I want to see a 100 million core supercomputer render one of those 3D "Mandelbulbs" [skytopia.com] and let me do some real-time exploring with a VR helmet.
Now THAT would be a worthy use for such resources!
That and being able to grow virtual beings from DNA samples.
-FL</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121212</id>
	<title>Re:Portland, Ore?</title>
	<author>Anonymous</author>
	<datestamp>1258365000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p> <b>how hard is it to write three more letters?</b></p> </div><p>

If I had to hazard a guess, I'd say it is exactly 3 letters harder to write 3 more letters.
</p><p>
Unless of course, you bold the last 3 letters, as you've done...then you have the html code to type so it ends up being like 10 letters more difficult. </p><p>

Then again...it depends on whether or not you mean 3 "additional" letters or the phrase "3 more letters." because, in that case, it's like 13...even more if you bold some of them...</p></div>
	</htmltext>
<tokenext>how hard is it to write three more letters ?
If I had to hazard a guess , I 'd say it is exactly 3 letters harder to write 3 more letters .
Unless of course , you bold the last 3 letters , as you 've done...then you have the html code to type so it ends up being like 10 letters more difficult .
Then again...it depends on whether or not you mean 3 " additional " letters or the phrase " 3 more letters . " because , in that case , it 's like 13...even more if you bold some of them.. .</tokentext>
<sentencetext> how hard is it to write three more letters?
If I had to hazard a guess, I'd say it is exactly 3 letters harder to write 3 more letters.
Unless of course, you bold the last 3 letters, as you've done...then you have the html code to type so it ends up being like 10 letters more difficult.
Then again...it depends on whether or not you mean 3 "additional" letters or the phrase "3 more letters." because, in that case, it's like 13...even more if you bold some of them...
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119642</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122692</id>
	<title>Re:human brain</title>
	<author>Anonymous</author>
	<datestamp>1258370220000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>One?<br>Once you have enough knowledge about how the brain works, it shouldn't matter if it gets simulated in hardware or software.</p></htmltext>
<tokenext>One ? Once you have enough knowledge about how the brain works , it should n't matter if it gets simulated in hardware or software .</tokentext>
<sentencetext>One?
Once you have enough knowledge about how the brain works, it shouldn't matter if it gets simulated in hardware or software.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120644</id>
	<title>Re:How many problems can these systems really solv</title>
	<author>T Murphy</author>
	<datestamp>1258362600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I'm always wary of making an infamous "50 MB of memory is all you'll ever need" type of claim, so I like to believe that we'll figure out how to use greater processing power by the time it gets here. We haven't had too much trouble with that so far. As far as actual use, if we ever get products like Morph (<a href="http://www.youtube.com/watch?v=IX-gTobCJHs" title="youtube.com">http://www.youtube.com/watch?v=IX-gTobCJHs</a> [youtube.com]), there might be a need for massively parallel processing. At the very least, such computing power would likely be needed to make such products.</htmltext>
<tokenext>I 'm always wary of making an infamous " 50 MB of memory is all you 'll ever need " type of claim , so I like to believe that we 'll figure out how to use greater processing power by the time it gets here .
We have n't had too much trouble with that so far .
As far as actual use , if we ever get products like Morph ( http : //www.youtube.com/watch ? v = IX-gTobCJHs [ youtube.com ] ) , there might be a need for massively parallel processing .
At the very least , such computing power would likely be needed to make such products .</tokentext>
<sentencetext>I'm always wary of making an infamous "50 MB of memory is all you'll ever need" type of claim, so I like to believe that we'll figure out how to use greater processing power by the time it gets here.
We haven't had too much trouble with that so far.
As far as actual use, if we ever get products like Morph (http://www.youtube.com/watch?v=IX-gTobCJHs [youtube.com]), there might be a need for massively parallel processing.
At the very least, such computing power would likely be needed to make such products.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119624</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125528</id>
	<title>Re:How many problems can these systems really solv</title>
	<author>Anonymous</author>
	<datestamp>1258389900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Easy solutions =have= been found, but then Inmos was sold off. The mice were furious.</p></htmltext>
<tokenext>Easy solutions = have = been found , but then Inmos was sold off .
The mice were furious .</tokentext>
<sentencetext>Easy solutions =have= been found, but then Inmos was sold off.
The mice were furious.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119624</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119364</id>
	<title>FIRST POST FOR THE ALL NEW GNAA!!!</title>
	<author>Anonymous</author>
	<datestamp>1258401600000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>VISIT US AT IRC.HARDCHATS.COM #GNAA</p><p>fcjioedfj</p><p>Filter error: Don't use so many caps. It's like YELLING.</p></htmltext>
<tokenext>VISIT US AT IRC.HARDCHATS.COM # GNAA
fcjioedfj
Filter error : Do n't use so many caps .
It 's like YELLING .</tokentext>
<sentencetext>VISIT US AT IRC.HARDCHATS.COM #GNAA
fcjioedfj
Filter error: Don't use so many caps.
It's like YELLING.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120122</id>
	<title>Re:100 Million?</title>
	<author>Archangel Michael</author>
	<datestamp>1258404000000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext><p>One Million Cores and one Sun hot mentioned in the same post, coincidence? I think not!</p></htmltext>
<tokenext>One Million Cores and one Sun hot mentioned in the same post , coincidence ?
I think not !</tokentext>
<sentencetext>One Million Cores and one Sun hot mentioned in the same post, coincidence?
I think not!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122500</id>
	<title>Perspective</title>
	<author>Anonymous</author>
	<datestamp>1258369440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Let&rsquo;s put this into perspective:  Intel&rsquo;s next generation of chips are estimated to have ~2 billion transistors (release in 2010 or 2011).  Moore's Law would predict that chips would have ~32 billion transistors in 2018.  That&rsquo;s an estimated 3.2E+18 transistors spread over the 100-million cores.</p><p>That&rsquo;s 32,000,000 times the number of neurons in the human brain.</p></htmltext>
<tokenext>Let 's put this into perspective : Intel 's next generation of chips are estimated to have ~ 2 billion transistors ( release in 2010 or 2011 ) .
Moore 's Law would predict that chips would have ~ 32 billion transistors in 2018 .
That 's an estimated 3.2E + 18 transistors spread over the 100-million cores .
That 's 32,000,000 times the number of neurons in the human brain .</tokentext>
<sentencetext>Let’s put this into perspective:  Intel’s next generation of chips are estimated to have ~2 billion transistors (release in 2010 or 2011).
Moore's Law would predict that chips would have ~32 billion transistors in 2018.
That’s an estimated 3.2E+18 transistors spread over the 100-million cores.
That’s 32,000,000 times the number of neurons in the human brain.</sentencetext>
</comment>
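The estimate above can be reproduced in a few lines. A minimal sketch under the commenter's own assumptions (a ~2-billion-transistor chip in 2010 and one Moore's Law doubling every two years):

```python
# Reproducing the comment's estimate under its own assumptions.
transistors_2010 = 2e9               # ~2 billion transistors per chip
doublings = 4                        # one doubling every ~2 years, 2010 -> 2018
transistors_2018 = transistors_2010 * 2**doublings   # ~3.2e10 per chip

cores = 1e8                          # the 100-million-core machine
total = transistors_2018 * cores     # ~3.2e18 transistors in the system
neurons = 1e11                       # rough human-brain neuron count

print(f"{total:.1e} transistors, ~{total / neurons:,.0f}x the brain's neuron count")
# -> 3.2e+18 transistors, ~32,000,000x the brain's neuron count
```

Transistors and neurons are not comparable units of computation, of course; the ratio is only an illustration of scale.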
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123106</id>
	<title>Re:Limits on simulation.</title>
	<author>jstults</author>
	<datestamp>1258372200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>The programming techniques and mathematical formulations needed to take advantage of such very large number of processors continue to be the main stumbling blocks.</p> </div><p>True.</p><p><div class="quote"><p>
The CFL condition that limits the maximum time step one can take shows no sign of relenting. Score has been Courant (the C in CFL) 1, Moore 0 for the last three decades.</p></div><p>There are more suitable methods for stiff, multi-scale problems (implicit time integration, preconditioning, multigrid) that remove those CFL constraints and alleviate the convergence problems (ill-conditioning) with large, high-resolution grids. They may be harder to parallelize, but they make those big problems more tractable.  I think most spectral/pseudo-spectral global circulation models (the summary mentions climate modeling) use some sort of implicit time-stepping at least.</p></div>
	</htmltext>
<tokenext>The programming techniques and mathematical formulations needed to take advantage of such very large number of processors continue to be the main stumbling blocks .
True . The CFL condition that limits the maximum time step one can take shows no sign of relenting .
Score has been Courant ( the C in CFL ) 1 , Moore 0 for the last three decades .
There are more suitable methods for stiff , multi-scale problems ( implicit time integration , preconditioning , multigrid ) that remove those CFL constraints and alleviate the convergence problems ( ill-conditioning ) with large , high-resolution grids .
They may be harder to parallelize , but they make those big problems more tractable .
I think most spectral/pseudo-spectral global circulation models ( the summary mentions climate modeling ) use some sort of implicit time-stepping at least .</tokentext>
<sentencetext>The programming techniques and mathematical formulations needed to take advantage of such very large number of processors continue to be the main stumbling blocks.
True.
The CFL condition that limits the maximum time step one can take shows no sign of relenting.
Score has been Courant (the C in CFL) 1, Moore 0 for the last three decades.
There are more suitable methods for stiff, multi-scale problems (implicit time integration, preconditioning, multigrid) that remove those CFL constraints and alleviate the convergence problems (ill-conditioning) with large, high-resolution grids.
They may be harder to parallelize, but they make those big problems more tractable.
I think most spectral/pseudo-spectral global circulation models (the summary mentions climate modeling) use some sort of implicit time-stepping at least.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119614</parent>
</comment>
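The explicit-versus-implicit distinction above is easy to see on a model problem. A minimal sketch (a stiff linear equation standing in for a fine-grid simulation; the constants are arbitrary): the explicit update blows up once the step exceeds its stability limit, while the implicit update stays stable at any step size.

```python
# Model problem du/dt = -lam * u. Forward (explicit) Euler is stable only
# for dt < 2/lam; backward (implicit) Euler is stable for any dt, which is
# the sense in which implicit methods remove the CFL-style step limit.
lam, dt, steps = 1000.0, 0.01, 50    # dt is 5x the explicit limit 2/lam

u_exp = u_imp = 1.0
for _ in range(steps):
    u_exp = u_exp + dt * (-lam * u_exp)   # explicit update: diverges here
    u_imp = u_imp / (1.0 + lam * dt)      # implicit update: decays, like the true solution

print(f"explicit: {u_exp:.3e}   implicit: {u_imp:.3e}")
```

The trade-off the comment names is real: each implicit step requires a solve (a trivial division here, a large linear system in practice), and that solve is where the parallelization difficulty moves.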
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119780</id>
	<title>Partly a software problem. Erlang?</title>
	<author>Anonymous</author>
	<datestamp>1258402920000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>We're still at the point where unthreaded languages (like PHP) are still viable. For example, we use PHP in a complex, multi-server, multi-core cluster, and its "share nothing" approach scales quite nicely, in that having more and more users hitting the system on separate servers doesn't really cause a problem, since there's virtually no cross-communication going on.</p><p>But there's a scalability limit in what you can do "PER PROCESS". There are some very processor intensive functions that simply take a while to do (such as rendering a 100 page report, then converting to PDF) and there's currently no way to spread the load in PHP beyond a single core.</p><p>At the other extreme, we have almost the same problem - having such a large number of cores that sharing resources among threads and processes is really no longer feasible.</p><p>Languages like Erlang have a "shared nothing" approach, not at the process/thread level but at the function level. Individual functions within a process are themselves "share nothing" and thus can easily scale across multiple cores, processors, and servers in a networked cluster. (at least, this is the theory)</p><p>So how 'bout it, folks? Where are the benchmarks showing how languages DESIGNED to take advantage of parallel processors and clusters actually scale up in the real world? Is Erlang the cat's meow when discussing systems of this scale?</p><p>I'm not expecting to see my example process (100 page PDF reports) scale up smoothly to 250,000 cores, but I sure would like to see it scale up smoothly to a dozen or two!</p></htmltext>
<tokenext>We 're still at the point where unthreaded languages ( like PHP ) are still viable .
For example , we use PHP in a complex , multi-server , multi-core cluster , and its " share nothing " approach scales quite nicely , in that having more and more users hitting the system on separate servers does n't really cause a problem , since there 's virtually no cross-communication going on .
But there 's a scalability limit in what you can do " PER PROCESS " .
There are some very processor intensive functions that simply take a while to do ( such as rendering a 100 page report , then converting to PDF ) and there 's currently no way to spread the load in PHP beyond a single core .
At the other extreme , we have almost the same problem - having such a large number of cores that sharing resources among threads and processes is really no longer feasible .
Languages like Erlang have a " shared nothing " approach , not at the process/thread level but at the function level .
Individual functions within a process are themselves " share nothing " and thus can easily scale across multiple cores , processors , and servers in a networked cluster .
( at least , this is the theory ) So how 'bout it , folks ?
Where are the benchmarks showing how languages DESIGNED to take advantage of parallel processors and clusters actually scale up in the real world ?
Is Erlang the cat 's meow when discussing systems of this scale ? I 'm not expecting to see my example process ( 100 page PDF reports ) scale up smoothly to 250,000 cores , but I sure would like to see it scale up smoothly to a dozen or two !</tokentext>
<sentencetext>We're still at the point where unthreaded languages (like PHP) are still viable.
For example, we use PHP in a complex, multi-server, multi-core cluster, and its "share nothing" approach scales quite nicely, in that having more and more users hitting the system on separate servers doesn't really cause a problem, since there's virtually no cross-communication going on.
But there's a scalability limit in what you can do "PER PROCESS".
There are some very processor intensive functions that simply take a while to do (such as rendering a 100 page report, then converting to PDF) and there's currently no way to spread the load in PHP beyond a single core.
At the other extreme, we have almost the same problem - having such a large number of cores that sharing resources among threads and processes is really no longer feasible.
Languages like Erlang have a "shared nothing" approach, not at the process/thread level but at the function level.
Individual functions within a process are themselves "share nothing" and thus can easily scale across multiple cores, processors, and servers in a networked cluster.
(at least, this is the theory)
So how 'bout it, folks?
Where are the benchmarks showing how languages DESIGNED to take advantage of parallel processors and clusters actually scale up in the real world?
Is Erlang the cat's meow when discussing systems of this scale?
I'm not expecting to see my example process (100 page PDF reports) scale up smoothly to 250,000 cores, but I sure would like to see it scale up smoothly to a dozen or two!</sentencetext>
</comment>
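The 100-page job described above is exactly the shape share-nothing parallelism handles well. A minimal sketch in Python rather than Erlang (multiprocessing standing in for isolated workers; render_page is a hypothetical stand-in for the real rendering step):

```python
from multiprocessing import Pool

def render_page(page_number: int) -> bytes:
    # Hypothetical stand-in for the expensive per-page work
    # (rendering one page, then converting it to PDF).
    return f"rendered page {page_number}".encode()

if __name__ == "__main__":
    # Share-nothing: each worker receives only its page number and returns
    # only its own result, so the 100-page report spreads across all
    # available cores with no locks and no shared state.
    with Pool() as pool:
        pages = pool.map(render_page, range(1, 101))
    print(f"rendered {len(pages)} pages in parallel")
```

Scaling this to a dozen or two cores is routine; at 250,000 cores the hard part becomes data movement and the final merge, not the map.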
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119596</id>
	<title>Sorry - I can't help myself</title>
	<author>RPGonAS400</author>
	<datestamp>1258402380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Can You Imagine a Beowulf Cluster of These?</htmltext>
<tokenext>Can You Imagine a Beowulf Cluster of These ?</tokentext>
<sentencetext>Can You Imagine a Beowulf Cluster of These?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123142</id>
	<title>Re:Oink, oink</title>
	<author>DriedClexler</author>
	<datestamp>1258372320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Well, OBVIOUSLY it's to provide a basis for those cringe-inducing IBM ads where they talk in really dumbed-down terms about how they're going to make a "smarter planet" by making everything run "smarter" because IBM is going to throw a shitload of supercomputing power at it, and have smart- or foreign-sounding people talk about how great it is.  (Nevermind that e.g. the alleged traffic congestion reduction was due to peak-load pricing, not to the ability to crunch numbers at supercomputer speeds.)</p><p><a href="http://www.youtube.com/watch?v=nZPQeqAoydQ" title="youtube.com">Example.</a> [youtube.com]</p><p>Get with the program, man!<nobr> <wbr></nobr>;-)</p></htmltext>
<tokenext>Well , OBVIOUSLY it 's to provide a basis for those cringe-inducing IBM ads where they talk in really dumbed-down terms about how they 're going to make a " smarter planet " by making everything run " smarter " because IBM is going to throw a shitload of supercomputing power at it , and have smart- or foreign-sounding people talk about how great it is .
( Nevermind that e.g. the alleged traffic congestion reduction was due to peak-load pricing , not to the ability to crunch numbers at supercomputer speeds . )
Example . [ youtube.com ]
Get with the program , man !
; - )</tokentext>
<sentencetext>Well, OBVIOUSLY it's to provide a basis for those cringe-inducing IBM ads where they talk in really dumbed-down terms about how they're going to make a "smarter planet" by making everything run "smarter" because IBM is going to throw a shitload of supercomputing power at it, and have smart- or foreign-sounding people talk about how great it is.
(Nevermind that e.g. the alleged traffic congestion reduction was due to peak-load pricing, not to the ability to crunch numbers at supercomputer speeds.)
Example. [youtube.com]
Get with the program, man!
;-)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119698</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119704</id>
	<title>AMD vs Intel</title>
	<author>teko_teko</author>
	<datestamp>1258402740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It's interesting that 4 of the top 5 supercomputers are running AMD, while 402 of the Top500 are running Intel.</p><p>What's the cause of this? Value? Energy-saving? Performance?</p></htmltext>
<tokenext>It 's interesting that 4 of the top 5 supercomputers are running AMD , while 402 of the Top500 are running Intel .
What 's the cause of this ?
Value ? Energy-saving ?
Performance ?</tokentext>
<sentencetext>It's interesting that 4 of the top 5 supercomputers are running AMD, while 402 of the Top500 are running Intel.
What's the cause of this?
Value? Energy-saving?
Performance?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121892</id>
	<title>Re:100 Million?</title>
	<author>wirelessbuzzers</author>
	<datestamp>1258367520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The Indians already have a <a href="http://en.wikipedia.org/wiki/Crore" title="wikipedia.org">word</a> [wikipedia.org] for 10 million cores.</p></htmltext>
<tokenext>The Indians already have a word [ wikipedia.org ] for 10 million cores .</tokentext>
<sentencetext>The Indians already have a word [wikipedia.org] for 10 million cores.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124032</id>
	<title>Re:Limits on simulation.</title>
	<author>jstults</author>
	<datestamp>1258377000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><nobr> <wbr></nobr></p><div class="quote"><p>...you have to calculate the time steps in order.</p></div><p>Foiled again by that damned 2nd Law (shakes fist in air)!</p></div>
	</htmltext>
<tokenext>...you have to calculate the time steps in order .
Foiled again by that damned 2nd Law ( shakes fist in air ) !</tokentext>
<sentencetext> ...you have to calculate the time steps in order.Foiled again by that damned 2nd Law (shakes fist in air)!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122880</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119748</id>
	<title>The Jaguar?</title>
	<author>Yvan256</author>
	<datestamp>1258402860000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><blockquote><div><p>The Jaguar is capable of a peak performance of 2.3 petaflops.</p></div></blockquote><p>The first <a href="http://en.wikipedia.org/wiki/Atari_Jaguar" title="wikipedia.org">Jaguar</a> [wikipedia.org] was a single megaflop.</p></div>
	</htmltext>
<tokenext>The Jaguar is capable of a peak performance of 2.3 petaflops .
The first Jaguar [ wikipedia.org ] was a single megaflop .</tokentext>
<sentencetext>The Jaguar is capable of a peak performance of 2.3 petaflops.
The first Jaguar [wikipedia.org] was a single megaflop.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119658</id>
	<title>Re:Who's President, Future-boy?</title>
	<author>Yvan256</author>
	<datestamp>1258402560000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>Forget the president, ask for the winning lottery numbers for the next 20 years!</p></htmltext>
<tokenext>Forget the president , ask for the winning lottery numbers for the next 20 years !</tokentext>
<sentencetext>Forget the president, ask for the winning lottery numbers for the next 20 years!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119960</id>
	<title>Speaking of heat</title>
	<author>Anonymous</author>
	<datestamp>1258403400000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext>I am currently accepting investors to help build a one billion core supercomputer to create high resolution climate models that take into account the waste heat from a 100 million core supercomputer making a high resolution climate model.<br>
<br>
(Seriously, how much heat is that thing going to put out?)</htmltext>
<tokenext>I am currently accepting investors to help build a one billion core supercomputer to create high resolution climate models that take into account the waste heat from a 100 million core supercomputer making a high resolution climate model .
( Seriously , how much heat is that thing going to put out ?
)</tokentext>
<sentencetext>I am currently accepting investors to help build a one billion core supercomputer to create high resolution climate models that take into account the waste heat from a 100 million core supercomputer making a high resolution climate model.
(Seriously, how much heat is that thing going to put out?
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122868</id>
	<title>Re:100 Million?</title>
	<author>cashX3r0</author>
	<datestamp>1258371120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>it's gonna take a great product like Vista to take advantage of this multicore technology.</htmltext>
<tokenext>it 's gon na take a great product like Vista to take advantage of this multicore technology .</tokentext>
<sentencetext>it's gonna take a great product like Vista to take advantage of this multicore technology.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122566</id>
	<title>Obligatory</title>
	<author>Anonymous</author>
	<datestamp>1258369680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>All your cores are belong to us!</htmltext>
<tokenext>All your cores are belong to us !</tokentext>
<sentencetext>All your cores are belong to us!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125246</id>
	<title>Wisdom of Experience</title>
	<author>jd</author>
	<datestamp>1258386960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Average human lifespan is 80 years. Assuming the author is roughly 30, then the author need only fear being ridiculed for underestimating the future for the next 50 years. Assume Moore's Law continues to hold and assume that scheduling problems constrain individual motherboards to 16x16 cores (16-way SMP is your limit, it's hard to imagine hardware inside the CPU is going to be any easier than hardware outside the CPU).</p><p>This means a desktop system in 50 years time can realistically expect to be limited to the equivalent of 256 cores that are each running at 100 terahertz. It won't be quite that architecture, but it should have that level of power.</p><p>It seems very wise to be scoffed at by a few people now and then hailed as a Visionary in his old age. It's not like anyone would give him anything now for being accurate, but Visionaries get their own TV shows, awards, endorsements - serious cash!</p><p>Far, far better to be a Visionary.</p></htmltext>
<tokenext>Average human lifespan is 80 years .
Assuming the author is roughly 30 , then the author need only fear being ridiculed for underestimating the future for the next 50 years .
Assume Moore 's Law continues to hold and assume that scheduling problems constrain individual motherboards to 16x16 cores ( 16-way SMP is your limit , it 's hard to imagine hardware inside the CPU is going to be any easier than hardware outside the CPU ) .
This means a desktop system in 50 years time can realistically expect to be limited to the equivalent of 256 cores that are each running at 100 terahertz .
It wo n't be quite that architecture , but it should have that level of power .
It seems very wise to be scoffed at by a few people now and then hailed as a Visionary in his old age .
It 's not like anyone would give him anything now for being accurate , but Visionaries get their own TV shows , awards , endorsements - serious cash ! Far , far better to be a Visionary .</tokentext>
<sentencetext>Average human lifespan is 80 years.
Assuming the author is roughly 30, then the author need only fear being ridiculed for underestimating the future for the next 50 years.
Assume Moore's Law continues to hold and assume that scheduling problems constrain individual motherboards to 16x16 cores (16-way SMP is your limit, it's hard to imagine hardware inside the CPU is going to be any easier than hardware outside the CPU).
This means a desktop system in 50 years time can realistically expect to be limited to the equivalent of 256 cores that are each running at 100 terahertz.
It won't be quite that architecture, but it should have that level of power.
It seems very wise to be scoffed at by a few people now and then hailed as a Visionary in his old age.
It's not like anyone would give him anything now for being accurate, but Visionaries get their own TV shows, awards, endorsements - serious cash!
Far, far better to be a Visionary.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430</parent>
</comment>
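The 50-year projection above can be put side by side with a straight Moore's Law extrapolation. A minimal sketch under the comment's own assumptions, plus one of mine (a rough 2009 desktop of four ~3 GHz cores as the baseline):

```python
# The comment's 50-year ceiling versus a straight Moore's Law extrapolation.
years, doubling_period = 50, 2
growth = 2 ** (years / doubling_period)   # straight Moore's Law: ~3.4e7x

cores, clock_hz = 256, 100e12             # the claimed ceiling: 256 cores at 100 THz
today_cores, today_hz = 4, 3e9            # assumed 2009 desktop baseline

claimed = (cores * clock_hz) / (today_cores * today_hz)
print(f"Moore's Law implies ~{growth:.1e}x; the 256-core/100 THz claim is ~{claimed:.1e}x")
# -> ~3.4e+07x versus ~2.1e+06x
```

The claimed ceiling comes out roughly a factor of sixteen below the straight extrapolation, consistent with the scheduling constraint the comment builds in.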
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120714</id>
	<title>Re:Oink, oink</title>
	<author>T Murphy</author>
	<datestamp>1258362900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>These systems cost a lot - it might take buzzwords to get politicians to buy into them and fund these sorts of projects. Even so, many energy projects are important to pour more research into, even if such projects often get watered down to a single misleading buzzword.</htmltext>
<tokenext>These systems cost a lot - it might take buzzwords to get politicians to buy into them and fund these sorts of projects .
Even so , many energy projects are important to pour more research into , even if such projects often get watered down to a single misleading buzzword .</tokentext>
<sentencetext>These systems cost a lot - it might take buzzwords to get politicians to buy into them and fund these sorts of projects.
Even so, many energy projects are important to pour more research into, even if such projects often get watered down to a single misleading buzzword.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119698</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119750</id>
	<title>Re:Sorry - I can't help myself</title>
	<author>Anonymous</author>
	<datestamp>1258402860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>Can You Imagine a Beowulf Cluster of These?</p></div><p>/me wanks furiously</p></div>
	</htmltext>
<tokenext>Can You Imagine a Beowulf Cluster of These ? /me wanks furiously</tokentext>
<sentencetext>Can You Imagine a Beowulf Cluster of These?/me wanks furiously
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119596</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119624</id>
	<title>How many problems can these systems really solve?</title>
	<author>wondi</author>
	<datestamp>1258402440000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>All this effort at creating parallel computing ends up solving very few problems. HPC has been struggling with parallelism for decades, and no easy solutions have been found yet. Note that these computers are aimed at solving a particular problem (e.g. modeling weather) and not at being a vehicle to quickly solve any problem.
When the comparable multi-processing capacity is in your cell phone, what are you going to do with it?</htmltext>
<tokenext>All this effort at creating parallel computing ends up solving very few problems .
HPC has been struggling with parallelism for decades , and no easy solutions have been found yet .
Note that these computers are aimed at solving a particular problem ( e.g. modeling weather ) and not at being a vehicle to quickly solve any problem .
When the comparable multi-processing capacity is in your cell phone , what are you going to do with it ?</tokentext>
<sentencetext>All this effort at creating parallel computing ends up solving very few problems.
HPC has been struggling with parallelism for decades, and no easy solutions have been found yet.
Note that these computers are aimed at solving a particular problem (e.g. modeling weather) and not at being a vehicle to quickly solve any problem.
When the comparable multi-processing capacity is in your cell phone, what are you going to do with it?</sentencetext>
</comment>
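The worry above has a standard quantitative form: Amdahl's law, under which the serial fraction of a program caps its speedup no matter how many cores are added. A minimal sketch with assumed serial fractions:

```python
# Amdahl's law: speedup = 1 / (s + (1 - s) / N) for serial fraction s on N cores.
def amdahl(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for s in (0.1, 0.01, 0.001):    # assumed serial fractions
    print(f"serial fraction {s}: speedup on 1e8 cores ~ {amdahl(s, 10**8):,.0f}x")
# -> ~10x, ~100x, ~1,000x: even 0.1% serial work idles
#    almost all of a 100-million-core machine.
```

This is why the machines are aimed at problems like weather, where the work decomposes over a spatial grid and the serial fraction can be driven very low.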
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120616</id>
	<title>Are all your cores used on your desktop?</title>
	<author>Ilgaz</author>
	<datestamp>1258362480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Check Wiki about "thinking machines", "transputer" and if you have more than 1 CPU/Core, launch a game and see if all cores are used effectively without needing massive additional work from the game publisher.</p><p>Technology is primitive, even a billion processor machine doesn't save it from being primitive. It is the software at least.</p></htmltext>
<tokenext>Check Wiki about " thinking machines " , " transputer " and if you have more than 1 CPU/Core , launch a game and see if all cores are used effectively without needing massive additional work from the game publisher .
Technology is primitive , even a billion processor machine does n't save it from being primitive .
It is the software at least .</tokentext>
<sentencetext>Check Wiki about "thinking machines", "transputer" and if you have more than 1 CPU/Core, launch a game and see if all cores are used effectively without needing massive additional work from the game publisher.
Technology is primitive, even a billion processor machine doesn't save it from being primitive.
It is the software at least.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120168</id>
	<title>Re:AMD vs Intel</title>
	<author>jamzo</author>
	<datestamp>1258404120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It's probably because of HyperTransport and floating point computation performance.  I think HyperTransport made it easier for supercomputer vendors like Cray to build better interconnects, and traditionally Opterons were a bit better at floating point ops.  Also, Opterons have 64KB L1 caches where I think comparable Intel processors had 32KB L1 caches.  But this was all a couple of years ago<nobr> <wbr></nobr>... the next generation of the fastest supercomputers will probably be Intel based.</p></htmltext>
<tokenext>It 's probably because of HyperTransport and floating point computation performance .
I think HyperTransport made it easier for supercomputer vendors like Cray to build better interconnects , and traditionally Opterons were a bit better at floating point ops .
Also , Opterons have 64KB L1 caches where I think comparable Intel processors had 32KB L1 caches .
But this was all a couple of years ago ... the next generation of the fastest supercomputers will probably be Intel based .</tokentext>
<sentencetext>It's probably because of HyperTransport and floating point computation performance.
I think HyperTransport made it easier for supercomputer vendors like Cray to build better interconnects, and traditionally Opterons were a bit better at floating point ops.
Also, Opterons have 64KB L1 caches where I think comparable Intel processors had 32KB L1 caches.
But this was all a couple of years ago ... the next generation of the fastest supercomputers will probably be Intel based.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119704</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120886</id>
	<title>Re:Speaking of heat</title>
	<author>blueg3</author>
	<datestamp>1258363620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>(Seriously, how much heat is that thing going to put out?)</p></div><p>As much energy as it consumes. For climate models, though, direct waste heat production is negligible compared to climatological effects (e.g., CO2).</p></div>
	</htmltext>
<tokenext>( Seriously , how much heat is that thing going to put out ?
) As much energy as it consumes .
For climate models , though , direct waste heat production is negligible compared to climatological effects ( e.g. , CO2 ) .</tokentext>
<sentencetext>(Seriously, how much heat is that thing going to put out?
)As much energy as it consumes.
For climate models, though, direct waste heat production is negligible compared to climatological effects (e.g., CO2).
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119960</parent>
</comment>
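The parenthetical question has a straightforward back-of-envelope answer, since essentially every watt a computer draws leaves as heat. A minimal sketch, assuming a 20 MW draw (an often-cited exascale power target, used here as an assumption rather than a specification):

```python
# Heat output equals electrical input, to a very good approximation.
power_w = 20e6                                # assumed draw: 20 MW
seconds_per_year = 365 * 24 * 3600
joules_per_year = power_w * seconds_per_year  # ~6.3e14 J per year

kettle_w = 2000                               # a 2 kW electric kettle
print(f"~{joules_per_year:.1e} J/year, the heat of "
      f"{power_w / kettle_w:,.0f} kettles running nonstop")
# -> ~6.3e+14 J/year, the heat of 10,000 kettles running nonstop
```

That is a lot for one building's cooling plant, and still negligible next to climatological forcings, as the reply says.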
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119600</id>
	<title>Re:100 Million?</title>
	<author>Jeremy Erwin</author>
	<datestamp>1258402380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The core? The surface? The corona?</p></htmltext>
<tokenext>The core ?
The surface ?
The corona ?</tokentext>
<sentencetext>The core?
The surface?
The corona?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122124</id>
	<title>Windows 15</title>
	<author>KaoticEvil</author>
	<datestamp>1258368180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>System Requirements<br>100 Million Core CPU...<br>5 Gigabytes RAM<br>1 Petabyte of hard drive space...</p></div></blockquote><p>Yeah, I can see it..</p></div>
	</htmltext>
<tokenext>System Requirements
100 Million Core CPU...
5 Gigabytes RAM
1 Petabyte of hard drive space...
Yeah , I can see it. .</tokentext>
<sentencetext>System Requirements100Million Core CPU...5 Gigabytes RAM1 Petabyte of hard drive space...Yeah, I can see it..
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124756</id>
	<title>Re:human brain</title>
	<author>jwhitener</author>
	<datestamp>1258382220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You need about 10^14, or 100 teraflops, or 100 trillion calculations per second, in parallel.  That's the best guess for the raw computation power of the brain.</p><p>Now that's just matching it, with no software and little understanding of many parts of the brain.  Some people suggest that the brain has other stuff going on that might be very hard to figure out from current mapping/scanning techniques.  I forget his name, but one researcher suspects that there are quantum effects, maybe even quantum level calculations, happening inside tiny parts of the brain that are affecting the outcomes.  And other researchers are looking at the effect of signal strength in neuron firing.  So not only are there 100 billion neurons each connected in a network performing 100 trillion neuron firings per second, but those firings are of various strengths which means that it is conveying more information than just "fired" or "not fired".</p><p>I think that's how I remember a summary from some AI article.  It boils down to the brain most likely having orders of magnitude more calculations per second than Hans Moravec estimated.  And your computer running the simulation is going to have all the overhead of creating the virtual biology.  That is, unless we can really understand what is important to the brain and what is not.  You can simulate an analog signal in a digital computer, but there is overhead involved.  100 trillion analog signals means your computer simulation is going to be doing way more than 100 trillion calculations.</p></htmltext>
<tokenext>You need about 10 ^ 14 , or 100 teraflops , or 100 trillion calculations per second , in parallel .
That 's the best guess for the raw computation power of the brain .
Now that 's just matching it , with no software and little understanding of many parts of the brain .
Some people suggest that the brain has other stuff going on that might be very hard to figure out from current mapping/scanning techniques .
I forget his name , but one researcher suspects that there are quantum effects , maybe even quantum level calculations , happening inside tiny parts of the brain that are affecting the outcomes .
And other researchers are looking at the effect of signal strength in neuron firing .
So not only are there 100 billion neurons each connected in a network performing 100 trillion neuron firings per second , but those firings are of various strengths which means that it is conveying more information than just " fired " or " not fired " .
I think that 's how I remember a summary from some AI article .
It boils down to the brain most likely having orders of magnitude more calculations per second than Hans Moravec estimated .
And your computer running the simulation is going to have all the overhead of creating the virtual biology .
That is , unless we can really understand what is important to the brain and what is not .
You can simulate an analog signal in a digital computer , but there is overhead involved .
100 trillion analog signals means your computer simulation is going to be doing way more than 100 trillion calculations .</tokentext>
<sentencetext>You need about 10^14, or 100 teraflops, or 100 trillion calculations per second, in parallel.
That's the best guess for the raw computation power of the brain.
Now that's just matching it, with no software and little understanding of many parts of the brain.
Some people suggest that the brain has other stuff going on that might be very hard to figure out from current mapping/scanning techniques.
I forget his name, but one researcher suspects that there are quantum effects, maybe even quantum level calculations, happening inside tiny parts of the brain that are affecting the outcomes.
And other researchers are looking at the effect of signal strength in neuron firing.
So not only are there 100 billion neurons each connected in a network performing 100 trillion neuron firings per second, but those firings are of various strengths which means that it is conveying more information than just "fired" or "not fired".
I think that's how I remember a summary from some AI article.
It boils down to the brain most likely having orders of magnitude more calculations per second than Hans Moravec estimated.
And your computer running the simulation is going to have all the overhead of creating the virtual biology.
That is, unless we can really understand what is important to the brain and what is not.
You can simulate an analog signal in a digital computer, but there is overhead involved.
100 trillion analog signals means your computer simulation is going to be doing way more than 100 trillion calculations.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830</parent>
</comment>
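The analog-overhead point above is visible even in the simplest spiking-neuron model. A minimal sketch (a toy leaky integrate-and-fire neuron; every constant here is illustrative, not a biological measurement):

```python
# One "analog" membrane voltage simulated digitally: thousands of update
# steps per simulated second, for a single neuron. Constants are illustrative.
dt, tau = 1e-4, 0.02             # 0.1 ms step, 20 ms membrane time constant
v_thresh, v_reset = 1.0, 0.0
v, spikes, drive = 0.0, 0, 1.5   # constant input drive

for _ in range(int(1.0 / dt)):       # one simulated second = 10,000 updates
    v += dt / tau * (drive - v)      # leaky integration toward the drive
    if v >= v_thresh:                # threshold crossing = one firing
        v = v_reset
        spikes += 1

print(f"{spikes} spikes in 1 s of simulated time ({int(1.0 / dt):,} updates)")
```

Even this crude model spends a few hundred update steps per spike; tracking firing strength and synaptic dynamics would multiply the overhead further, which is the comment's point.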
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125726</id>
	<title>Re:Limits on simulation.</title>
	<author>dontmakemethink</author>
	<datestamp>1258392300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>The CFL condition that limits the maximum time step one can take shows no sign of relenting.</p></div><p>You mean they're STILL punting on 3rd down?!</p></div>
	</htmltext>
<tokenext>The CFL condition that limits the maximum time step one can take shows no sign of relenting .
You mean they 're STILL punting on 3rd down ? !</tokentext>
<sentencetext>The CFL condition that limits the maximum time step one can take shows no sign of relenting.You mean they're STILL punting on 3rd down?
!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119614</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123560</id>
	<title>Boom, boom</title>
	<author>AlpineR</author>
	<datestamp>1258374300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>It's striking how few supercomputers are sold to commercial companies. Even the military doesn't use them much any more.</p></div></blockquote><p>Oak Ridge National Laboratory, home to the world's fastest supercomputer, does a lot of work for <a href="http://www.ornl.gov/ornlhome/national_security.shtml" title="ornl.gov">national security</a> [ornl.gov]. At the labs housing the top ten supercomputers, at least five do weapons and defense research. And that's just what the public knows about. I would be shocked if there weren't similar supercomputers working on intelligence and classified projects.</p><p>Even if the computers aren't stamped with "U.S. Army", the military does indeed use many of them. The wonderful side effect of their push to simulate things like aging nuclear weapons is that it helps develop the technology for peacetime purposes like renewable energy and pharmaceuticals.</p></div>
	</htmltext>
<tokenext>It 's striking how few supercomputers are sold to commercial companies .
Even the military does n't use them much any more .
Oak Ridge National Laboratory , home to the world 's fastest supercomputer , does a lot of work for national security [ ornl.gov ] .
At the labs housing the top ten supercomputers , at least five do weapons and defense research .
And that 's just what the public knows about .
I would be shocked if there were n't similar supercomputers working on intelligence and classified projects .
Even if the computers are n't stamped with " U.S. Army " , the military does indeed use many of them .
The wonderful side effect of their push to simulate things like aging nuclear weapons is that it helps develop the technology for peacetime purposes like renewable energy and pharmaceuticals .</tokentext>
<sentencetext>It's striking how few supercomputers are sold to commercial companies.
Even the military doesn't use them much any more.
Oak Ridge National Laboratory, home to the world's fastest supercomputer, does a lot of work for national security [ornl.gov].
At the labs housing the top ten supercomputers, at least five do weapons and defense research.
And that's just what the public knows about.
I would be shocked if there weren't similar supercomputers working on intelligence and classified projects.
Even if the computers aren't stamped with "U.S. Army", the military does indeed use many of them.
The wonderful side effect of their push to simulate things like aging nuclear weapons is that it helps develop the technology for peacetime purposes like renewable energy and pharmaceuticals.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119698</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120642</id>
	<title>Re:Who's President, Future-boy?</title>
	<author>Anonymous</author>
	<datestamp>1258362600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I think of our supercomputing systems as primitive in an analogous way as cavemen wouldn't end up with a rocket thruster if they just throw enough logs on a fire.</p><p>Without more advanced software designs and some type of revolutionary system architecture, more cores ends up only being slightly better than linear progression.  They're primitive in that our supercomputers are seldom more than the sum of their parts.</p></htmltext>
<tokenext>I think of our supercomputing systems as primitive in an analogous way as cavemen would n't end up with a rocket thruster if they just throw enough logs on a fire .
Without more advanced software designs and some type of revolutionary system architecture , more cores ends up only being slightly better than linear progression .
They 're primitive in that our supercomputers are seldom more than the sum of their parts .</tokentext>
<sentencetext>I think of our supercomputing systems as primitive in an analogous way as cavemen wouldn't end up with a rocket thruster if they just throw enough logs on a fire.
Without more advanced software designs and some type of revolutionary system architecture, more cores ends up only being slightly better than linear progression.
They're primitive in that our supercomputers are seldom more than the sum of their parts.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121972</id>
	<title>Re:100 Million?</title>
	<author>Anonymous</author>
	<datestamp>1258367700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>Like describing how hot the Sun is....let's just say it's "exactly 1 Sun hot".</p></div><p>Dude 1: "She's soooo hot!"<br>Dude 2: "Yeah, Man, she's 1 Sun hot"<br>Dude 1: "No way, she's at least 2 Suns"</p></div>
	</htmltext>
<tokenext>Like describing how hot the Sun is....let 's just say it 's " exactly 1 Sun hot " . Dude 1 : " She 's soooo hot !
" Dude 2 : " Yeah , Man , she 's 1 Sun hot " Dude 1 : " No way , she 's at least 2 Suns "</tokentext>
<sentencetext>Like describing how hot the Sun is.... let's just say it's "exactly 1 Sun hot". Dude 1: "She's soooo hot!"
Dude 2: "Yeah, Man, she's 1 Sun hot" Dude 1: "No way, she's at least 2 Suns"
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120134</id>
	<title>Sure</title>
	<author>Stan Vassilev</author>
	<datestamp>1258404060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I'm still waiting for that 10GHz Pentium Intel promised for 2004.</htmltext>
<tokenext>I 'm still waiting for that 10GHz Pentium Intel promised for 2004 .</tokentext>
<sentencetext>I'm still waiting for that 10GHz Pentium Intel promised for 2004.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119698</id>
	<title>Oink, oink</title>
	<author>Anonymous</author>
	<datestamp>1258402740000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>
<i> The exascale systems will be needed for high-resolution climate models, bio energy products and smart grid development as well as fusion energy design.</i>
</p><p>
Sounds like a pork program.  What are "bio energy products", anyway?  Ethanol?  Supercomputer proposals seem to come with whatever buzzword is hot this year.
</p><p>
It's striking how few supercomputers are sold to commercial companies.   Even the military doesn't use them much any more.</p></htmltext>
<tokenext>The exascale systems will be needed for high-resolution climate models , bio energy products and smart grid development as well as fusion energy design .
Sounds like a pork program .
What are " bio energy products " , anyway .
Ethanol ? Supercomputer proposals seem to come with whatever buzzword is hot this year .
It 's striking how few supercomputers are sold to commercial companies .
Even the military does n't use them much any more .</tokentext>
<sentencetext>
 The exascale systems will be needed for high-resolution climate models, bio energy products and smart grid development as well as fusion energy design.
Sounds like a pork program.
What are "bio energy products", anyway.
Ethanol?  Supercomputer proposals seem to come with whatever buzzword is hot this year.
It's striking how few supercomputers are sold to commercial companies.
Even the military doesn't use them much any more.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30129884</id>
	<title>Re:human brain</title>
	<author>haxor.dk</author>
	<datestamp>1258477560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Slightly over 9000.</p></htmltext>
<tokenext>Slightly over 9000 .</tokentext>
<sentencetext>Slightly over 9000.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119800</id>
	<title>creators' big flash coming way before 2018</title>
	<author>Anonymous</author>
	<datestamp>1258402980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>that will settle who 'owns' what forever.</p><p>it's newclear powered, user friendly, &amp; completely bug free, as well free to use, forever.</p></htmltext>
<tokenext>that will settle who 'owns ' what forever . it 's newclear powered , user friendly , &amp; completely bug free , as well free to use , forever .</tokentext>
<sentencetext>that will settle who 'owns' what forever. it's newclear powered, user friendly, &amp; completely bug free, as well free to use, forever.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30127108</id>
	<title>What the hell...</title>
	<author>tengeta</author>
	<datestamp>1258453740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext>Chances are that, just like 10 years ago, the architecture won't even be the same. This is useless; 100 million cores are only needed by Google for its porn searches.</htmltext>
<tokenext>Chances are just like 10 years ago , the architecture wo n't even be the same .
This is useless , 100 million cores are only needed by google for its porn searches .</tokentext>
<sentencetext>Chances are that, just like 10 years ago, the architecture won't even be the same.
This is useless; 100 million cores are only needed by Google for its porn searches.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125350</id>
	<title>Re:Speaking of heat</title>
	<author>turing\_m</author>
	<datestamp>1258387800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>(Seriously, how much heat is that thing going to put out?)</p></div></blockquote><p>
Your joke was funny. To answer your serious question, probably less heat than you might think. For example, the i5-750 with the so-called "turbo boost" runs one or two cores fast, taking up the whole power budget when that is called for, or potentially all 4 cores within the same power envelope if there is something that can use 4 cores. That's 4 cores doing much more work than 1, and using the same amount of power. More cores running slower must keep scaling like this, otherwise chipmakers wouldn't be doing it.</p>
	</htmltext>
<tokenext>( Seriously , how much heat is that thing going to put out ?
) Your joke was funny .
To answer your serious question , probably less heat than you might think .
For example , the i5-750 with the so-called " turbo boost " uses one or two cores running fast and taking up all the power when that is called for , or potentially all 4 cores running within the same power envelope if there is something that can use 4 cores .
That 's 4 cores doing much more work than 1 , and using the same amount of power .
More cores running slower must continue scaling , otherwise they would n't be doing it .</tokentext>
<sentencetext>(Seriously, how much heat is that thing going to put out?
)
Your joke was funny.
To answer your serious question, probably less heat than you might think.
For example, the i5-750 with the so-called "turbo boost" uses one or two cores running fast and taking up all the power when that is called for, or potentially all 4 cores running within the same power envelope if there is something that can use 4 cores.
That's 4 cores doing much more work than 1, and using the same amount of power.
More cores running slower must continue scaling, otherwise they wouldn't be doing it.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119960</parent>
</comment>
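The turbo-boost point above is the dynamic-power equation at work: per-core power goes roughly as C·V²·f, and a lower frequency permits a lower voltage, so several slow cores can fit the envelope of one fast one while doing more aggregate work. A back-of-envelope Python sketch; the voltages and clocks are illustrative guesses, not i5-750 datasheet values:

# Rough dynamic-power model: P ~ C * V^2 * f per core (leakage ignored).
def core_power(v, f_ghz, c=1.0):
    return c * v ** 2 * f_ghz

one_fast  = 1 * core_power(v=1.35, f_ghz=3.2)  # single boosted core
four_slow = 4 * core_power(v=0.85, f_ghz=2.0)  # all cores, lower V and f

print(f"1 fast core : {one_fast:.2f} arb. units, aggregate clock 3.2 GHz")
print(f"4 slow cores: {four_slow:.2f} arb. units, aggregate clock 8.0 GHz")
# A similar power envelope buys ~2.5x the aggregate cycles -- provided
# the workload actually parallelizes across the four cores.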
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120444</id>
	<title>Re:Partly a software problem. Erlang?</title>
	<author>vlm</author>
	<datestamp>1258404960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p><div class="quote"><p>I'm not expecting to see my example process (100 page PDF reports) scale up smoothly to 250,000 cores, but I sure would like to see it scale up smoothly to a dozen or two!</p></div><p>Well, that's not very hard.  Split the job like the ray tracers do, into 250K little parts of the 100 page report, have each core individually render its little bit, then mush all the rendered outputs together.</p><p>You could do this now, more or less off the shelf, by separating your raw data into 100 raw input files, one for each page, then have 100 machines or cores or whatever render each separate page, then a big run of pdfjoin to turn 100 single page pdfs into one 100 page pdf.</p>
	</htmltext>
<tokenext>I 'm not expecting to see my example process ( 100 page PDF reports ) scale up smoothly to 250,000 cores , but I sure would like to see it scale up smoothly to a dozen or two ! Well , that 's not very hard .
Split the job like the ray tracers do , into 250K little parts of the 100 page report , have each core individually render its little bit , then mush all the rendered outputs together.You could do this now , more or less off the shelf , by separating your raw data into 100 raw input files , one for each page , then have 100 machines or cores or whatever render each separate page , then a big run of pdfjoin to turn 100 single page pdfs into one 100 page pdf .</tokentext>
<sentencetext>I'm not expecting to see my example process (100 page PDF reports) scale up smoothly to 250,000 cores, but I sure would like to see it scale up smoothly to a dozen or two!Well, that's not very hard.
Split the job like the ray tracers do, into 250K little parts of the 100 page report, have each core individually render its little bit, then mush all the rendered outputs together.You could do this now, more or less off the shelf, by separating your raw data into 100 raw input files, one for each page, then have 100 machines or cores or whatever render each separate page, then a big run of pdfjoin to turn 100 single page pdfs into one 100 page pdf.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119780</parent>
</comment>
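A minimal Python sketch of the page-per-worker scheme described above. render_page() is a hypothetical stand-in for whatever actually draws one page, and pypdf's PdfWriter stands in for the pdfjoin run; both names are assumptions for illustration, not anything from the thread:

from multiprocessing import Pool
from pypdf import PdfWriter   # merge step; stands in for pdfjoin

def render_page(args):
    page_no, raw = args
    out = f"page_{page_no:03d}.pdf"
    # ... render `raw` into a one-page PDF at `out` (reportlab, LaTeX, etc.)
    return out

def build_report(raw_pages, workers=24):
    with Pool(workers) as pool:
        parts = pool.map(render_page, list(enumerate(raw_pages)))
    writer = PdfWriter()
    for part in parts:        # pool.map preserves page order
        writer.append(part)   # "mush all the rendered outputs together"
    with open("report.pdf", "wb") as f:
        writer.write(f)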
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120212</id>
	<title>Coming By 2012</title>
	<author>Anonymous</author>
	<datestamp>1258404300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Republican reign for the next 1000 years after Obama fails.</p><p>Yours In Moscow,<br>Kilgore T.</p></htmltext>
<tokenext>Republican reign for the next 1000 years after Obama fails . Yours In Moscow , Kilgore T .</tokentext>
<sentencetext>Republican reign for the next 1000 years after Obama fails. Yours In Moscow, Kilgore T.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430</id>
	<title>Who's President, Future-boy?</title>
	<author>pete-classic</author>
	<datestamp>1258401840000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
<htmltext><blockquote><div><p>As amazing as today's supercomputing systems are, they remain primitive</p></div></blockquote><p>Wait, what?  You lost me.  Are you from the future?  How can you describe the state of the art as "primitive"?</p><p>-Peter</p>
	</htmltext>
<tokenext>As amazing as today 's supercomputing systems are , they remain primitive . Wait , what ?
You lost me .
Are you from the future ?
How can you describe the state of the art as " primitive " ? -Peter</tokentext>
<sentencetext>As amazing as today's supercomputing systems are, they remain primitive. Wait, what?
You lost me.
Are you from the future?
How can you describe the state of the art as "primitive"? -Peter
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120006</id>
	<title>Re:How many problems can these systems really solv</title>
	<author>David Greene</author>
	<datestamp>1258403580000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p><div class="quote"><p>Note that these computers are aimed at solving a particular problem (e.g. modeling weather) and not at being a vehicle to quickly solve any problem.</p></div><p>
That's not entirely accurate.  HPC systems are designed to solve a class of problems.  That's not the same thing as a "particular" problem.  Jaguar has, in fact, solved many different problems, including fluid flow, weather, nuclear fusion and supernova modeling.  It's not going to run Word any faster than your PC but that's not what you buy a supercomputer to do.
</p>
	</htmltext>
<tokenext>Note that these computers are aimed at solving a particular problem ( e.g .
modeling weather ) and not at being a vehicle to quickly solve any problem .
That 's not entirely accurate .
HPC systems are designed to solve a class of problems .
That 's not the same thing as a " particular " problem .
Jaguar has , in fact , solved many different problems , including fluid flow , weather , nuclear fusion and supernova modeling .
It 's not going to run Word any faster than your PC but that 's not what you buy a supercomputer to do .</tokentext>
<sentencetext>Note that these computers are aimed at solving a particular problem (e.g. modeling weather) and not at being a vehicle to quickly solve any problem.
That's not entirely accurate.
HPC systems are designed to solve a class of problems.
That's not the same thing as a "particular" problem.
Jaguar has, in fact, solved many different problems, including fluid flow, weather, nuclear fusion and supernova modeling.
It's not going to run Word any faster than your PC but that's not what you buy a supercomputer to do.

	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119624</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420</id>
	<title>100 Million?</title>
	<author>Anonymous</author>
	<datestamp>1258401780000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
<htmltext>Can't we just start calling this a 'supercore' or something? When the numbers get that high it kind of goes beyond what most people can visualize. Like describing how hot the Sun is....let's just say it's "exactly 1 Sun hot".</htmltext>
<tokenext>Ca n't we just start calling this a 'supercore ' or something ?
When the numbers get that high it kind of goes beyond what most people can visualize .
Like describing how hot the Sun is....let 's just say it 's " exactly 1 Sun hot " .</tokentext>
<sentencetext>Can't we just start calling this a 'supercore' or something?
When the numbers get that high it kind of goes beyond what most people can visualize.
Like describing how hot the Sun is.... let's just say it's "exactly 1 Sun hot".</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120304</id>
	<title>How about reconfigurable computing instead?</title>
	<author>Anonymous</author>
	<datestamp>1258404540000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>...Such as, say, FPGAs (www.maxeler.com) or GPUs (www.ati.com)<br>One such accelerator card can replace ~100 cores for common applications such as finite differences or Monte Carlo...<br>Hence you would need "only" a million blades.</p></htmltext>
<tokenext>...Such as , say , FPGAs ( www.maxeler.com ) or GPUs ( www.ati.com ) One such accelerator card can replace ~ 100 cores for common applications such as finite differences or MonteCarlo ...Hence you would need " only " a million blades .</tokentext>
<sentencetext>...Such as, say, FPGAs (www.maxeler.com) or GPUs (www.ati.com). One such accelerator card can replace ~100 cores for common applications such as finite differences or Monte Carlo... Hence you would need "only" a million blades.</sentencetext>
</comment>
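Monte Carlo is the accelerator-friendly shape the poster has in mind: one tiny kernel applied to millions of independent samples. A NumPy sketch of that data-parallel pattern (a GPU or FPGA just runs the same idea much wider):

# Data-parallel Monte Carlo (pi estimate): one kernel over millions of
# independent samples -- exactly the shape GPUs and FPGAs accelerate well.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000_000
x, y = rng.random(n), rng.random(n)      # independent samples
inside = (x * x + y * y) < 1.0           # one vectorized "kernel"
print("pi ~", 4.0 * inside.mean())       # ~3.1416
# Every sample is independent, so the loop maps onto however many lanes
# the hardware offers: SIMD units, GPU threads, or FPGA pipelines.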
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124666</id>
	<title>But...</title>
	<author>TheOV</author>
	<datestamp>1258381380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>can it run Crysis???</htmltext>
<tokenext>can it run Crysis ? ?
?</tokentext>
<sentencetext>can it run Crysis??
?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119978</id>
	<title>Re:Who's President, Future-boy?</title>
	<author>Z00L00K</author>
	<datestamp>1258403460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>You can still predict that some tech is primitive.</p><p>When a computer develops a mind of its own in a logical manner, it's starting to reach the human level, and we can start to discuss whether it's primitive or not. If it starts to reproduce on its own, it's time to be careful.</p></htmltext>
<tokenext>You can still predict that some tech is primitive . When a computer develops a mind of its own in a logical manner it 's starting to reach the human level and we can start to discuss if it 's primitive or not .
If it starts to reproduce on its own it 's time to be careful .</tokentext>
<sentencetext>You can still predict that some tech is primitive. When a computer develops a mind of its own in a logical manner, it's starting to reach the human level, and we can start to discuss whether it's primitive or not.
If it starts to reproduce on its own, it's time to be careful.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124432</id>
	<title>Re:100 Million?</title>
	<author>Nefarious Wheel</author>
	<datestamp>1258379640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I would suspect Heuristically programmed ALgorithmic processor.  When you get 9000 cores together, anyway.<p>I'm sorry, Dave...</p></htmltext>
<tokenext>I would suspect Heuristically programmed ALgorithmic processor .
When you get 9000 cores together , anyway.I 'm sorry , Dave.. .</tokentext>
<sentencetext>I would suspect Heuristically programmed ALgorithmic processor.
When you get 9000 cores together, anyway. I'm sorry, Dave...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119944</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125154</id>
	<title>Re:Speaking of heat</title>
	<author>jd</author>
	<datestamp>1258386120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It depends on what sort of cores are used. Most real problems are not SIMD but MIMD. You can turn a MIMD problem into a very large set of SIMD problems, though, where each one of those SIMD problems is actually extremely narrow and does not require a completely general-purpose core.</p><p>What you end up with is a hybrid of highly specialized cores, each very tightly tuned to some specific sub-category of problem. Since it's tuned, it can be ultra-RISC (it might not even need to be Turing Complete in some cases).</p><p>Since the critical path determines deadlines, and most processes will be OFF the critical path, you can also have asynchronous cores. The University of Manchester's AMULET group has been working on asynchronous computing for a while and has some nice technology, including open-source tools for making asynchronous chips.</p><p>Switching is another big heat-generator, since extreme computing like this really needs either a butterfly topology or a hypercube topology. In either case, IIRC, the number of switches goes up with the power of the number of nodes. At 100 million nodes, you'd be looking at tens of quadrillions of switches. Even if each switch is relatively lightweight on power, the sheer number makes the total consumption significant.</p><p>However, it's not all bad. Any collective operation can be delivered via NACK-oriented reliable multicast, so it's a single-shot delivery to as many nodes as you like.</p><p>Since supercomputing IS all about critical paths and deadlines, it should be possible to use delay tolerant networking protocols and have switches deliver as much in bulk as possible to minimize processing needed except on the periphery of the network. So long as packets get to their destinations by the deadlines, it makes no difference to the nodes whether they get the data then block on a barrier operation or whether they get the data just as the barrier operation is due to finish.</p><p>So there's lots of ways to cut down on the heat on the hardware side. What about the software side? Well, competent programmers are often a good start. A million cores doesn't mean it's ok to be a million times less efficient.</p><p>Code should be tightly written and customized for the specific hardware (including network) characteristics. It should then be profiled, baked, broiled and fried until light and crispy. The programmers should then refactor the code until it's done right.</p><p>The theoretical minimum heat output from a hybrid asynchronous computer is the heat you'd generate if one core is one processing element with enough memory to perform its task. You can't get any more RISC than a single instruction. The theoretical minimum heat output from a deliberately delaying network is the heat you'd generate in queueing and bulk-delivering data such that nodes are either off or busy, they're never idle on the offchance of data coming soon.</p><p>Idle nodes are the really bad nodes. If a node cannot be used right then and is off, it consumes nothing. A node is most likely to be idle if there is simply no way of telling if it'll be assigned work or not.</p><p>The next-worst are nodes that do things too fast. Doing things too fast means the memory has to stay powered to preserve the results but nothing can consume the results any time soon. Power consumed is not directly proportional to processor speed. It's not linear. 
If you detune the processor such that it always has the data ready just in time to be consumed, you will always use less power than storing.</p><p>The upshot of this is your input buffer wants to be very large (so you can bulk-receive) but the output buffer wants to be comparatively small (because you are generating results when they're needed and no sooner).</p><p>If you're really smart, you'd make the input buffer high-bandwidth and the output buffer high-speed. If you're generating just-in-time, the last thing you want is for the output to be slow in being forwarded on. On the other hand, data will be dropped into the input far faster than it can be consumed (since it's a bulk delive</p></htmltext>
<tokenext>It depends on what sort of cores are used .
Most real problems are not SIMD but MIMD .
You can turn a MIMD problem into a very large set of SIMD problems , though , where each one of those SIMD problems is actually extremely narrow and does not require a completely general-purpose core.What you end up with is a hybrid of highly specialized cores , each very tightly tuned to some specific sub-category of problem .
Since it 's tuned , it can be ultra-RISC ( it might not even need to be Turing Complete in some cases ) .Since the critical path determines deadlines , and most processes will be OFF the critical path , you can also have asynchronous cores .
The University of Manchester 's AMULET group has been working on asynchronous computing for a while and has some nice technology , including open-source tools for making asynchronous chips.Switching is another big heat-generator , since extreme computing like this really needs either a butterfly topology or a hypercube topology .
In either case , IIRC , the number of switches goes up with the power of the number of nodes .
At 100 million nodes , you 'd be looking at tens of quadrillions of switches .
Even if each switch is relatively lightweight on power , the sheer number makes the total consumption significant.However , it 's not all bad .
Any collective operation can be delivered via NACK-oriented reliable multicast , so it 's a single-shot delivery to as many nodes as you like.Since supercomputing IS all about critical paths and deadlines , it should be possible to use delay tolerant networking protocols and have switches deliver as much in bulk as possible to minimize processing needed except on the periphery of the network .
So long as packets get to their destinations by the deadlines , it makes no difference to the nodes whether they get the data then block on a barrier operation or whether they get the data just as the barrier operation is due to finish.So there 's lots of ways to cut down on the heat on the hardware side .
What about the software side ?
Well , competent programmers are often a good start .
A million cores does n't mean it 's ok to be a million times less efficient.Code should be tightly written and customized for the specific hardware ( including network ) characteristics .
It should then be profiled , baked , broiled and fried until light and crispy .
The programmers should then refactor the code until it 's done right.The theoretical minimum heat output from a hybrid asynchronous computer is the heat you 'd generate if one core is one processing element with enough memory to perform its task .
You ca n't get any more RISC than a single instruction .
The theoretical minimum heat output from a deliberately delaying network is the heat you 'd generate in queueing and bulk-delivering data such that nodes are either off or busy , they 're never idle on the offchance of data coming soon.Idle nodes are the really bad nodes .
If a node can not be used right then and is off , it consumes nothing .
A node is most likely to be idle if there is simply no way of telling if it 'll be assigned work or not.The next-worst are nodes that do things too fast .
Doing things too fast means the memory has to stay powered to preserve the results but nothing can consume the results any time soon .
Power consumed is not directly proportional to processor speed .
It 's not linear .
If you detune the processor such that it always has the data ready just in time to be consumed , you will always use less power than storing.The upshot of this is your input buffer wants to be very large ( so you can bulk-receive ) but the output buffer wants to be comparatively small ( because you are generating results when they 're needed and no sooner ) .If you 're really smart , you 'd make the input buffer high-bandwidth and the output buffer high-speed .
If you 're generating just-in-time , the last thing you want is for the output to be slow in being forwarded on .
On the other hand , data will be dropped into the input far faster than it can be consumed ( since it 's a bulk delive</tokentext>
<sentencetext>It depends on what sort of cores are used.
Most real problems are not SIMD but MIMD.
You can turn a MIMD problem into a very large set of SIMD problems, though, where each one of those SIMD problems is actually extremely narrow and does not require a completely general-purpose core. What you end up with is a hybrid of highly specialized cores, each very tightly tuned to some specific sub-category of problem.
Since it's tuned, it can be ultra-RISC (it might not even need to be Turing Complete in some cases). Since the critical path determines deadlines, and most processes will be OFF the critical path, you can also have asynchronous cores.
The University of Manchester's AMULET group has been working on asynchronous computing for a while and has some nice technology, including open-source tools for making asynchronous chips. Switching is another big heat-generator, since extreme computing like this really needs either a butterfly topology or a hypercube topology.
In either case, IIRC, the number of switches goes up with the power of the number of nodes.
At 100 million nodes, you'd be looking at tens of quadrillions of switches.
Even if each switch is relatively lightweight on power, the sheer number makes the total consumption significant. However, it's not all bad.
Any collective operation can be delivered via NACK-oriented reliable multicast, so it's a single-shot delivery to as many nodes as you like. Since supercomputing IS all about critical paths and deadlines, it should be possible to use delay tolerant networking protocols and have switches deliver as much in bulk as possible to minimize processing needed except on the periphery of the network.
So long as packets get to their destinations by the deadlines, it makes no difference to the nodes whether they get the data then block on a barrier operation or whether they get the data just as the barrier operation is due to finish. So there's lots of ways to cut down on the heat on the hardware side.
What about the software side?
Well, competent programmers are often a good start.
A million cores doesn't mean it's ok to be a million times less efficient. Code should be tightly written and customized for the specific hardware (including network) characteristics.
It should then be profiled, baked, broiled and fried until light and crispy.
The programmers should then refactor the code until it's done right. The theoretical minimum heat output from a hybrid asynchronous computer is the heat you'd generate if one core is one processing element with enough memory to perform its task.
You can't get any more RISC than a single instruction.
The theoretical minimum heat output from a deliberately delaying network is the heat you'd generate in queueing and bulk-delivering data such that nodes are either off or busy; they're never idle on the off-chance of data coming soon. Idle nodes are the really bad nodes.
If a node cannot be used right then and is off, it consumes nothing.
A node is most likely to be idle if there is simply no way of telling if it'll be assigned work or not. The next-worst are nodes that do things too fast.
Doing things too fast means the memory has to stay powered to preserve the results but nothing can consume the results any time soon.
Power consumed is not directly proportional to processor speed.
It's not linear.
If you detune the processor such that it always has the data ready just in time to be consumed, you will always use less power than storing. The upshot of this is your input buffer wants to be very large (so you can bulk-receive) but the output buffer wants to be comparatively small (because you are generating results when they're needed and no sooner). If you're really smart, you'd make the input buffer high-bandwidth and the output buffer high-speed.
If you're generating just-in-time, the last thing you want is for the output to be slow in being forwarded on.
On the other hand, data will be dropped into the input far faster than it can be consumed (since it's a bulk delive</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119960</parent>
</comment>
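The "IIRC" switch-count figure above is easy to sanity-check against the textbook formulas: a radix-k butterfly needs about (N/k)·log_k N switches, and a hypercube has N·log2(N)/2 links, so 100 million endpoints lands in the millions-to-billions range rather than "tens of quadrillions". A quick Python check, using idealized formulas and an assumed radix of 64:

import math

N = 100_000_000   # endpoints
k = 64            # assumed switch radix

stages   = math.ceil(math.log(N, k))      # butterfly stages
switches = (N // k) * stages              # radix-k switches in total
links    = int(N * math.log2(N) / 2)      # hypercube links

print(f"butterfly: {stages} stages, {switches:,} radix-{k} switches")
print(f"hypercube: {links:,} links")
# ~7.8 million switches and ~1.3 billion links: huge, and a real power
# sink as the comment says, but both grow as N log N rather than N^2.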
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120416</id>
	<title>Colon Blow</title>
	<author>Beelzebud</author>
	<datestamp>1258404900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>To put this into perspective, it would take over 4 and a half million bowls of Super Colon Blow to equal the computation power of just 1 of these things!</htmltext>
<tokenext>To put this into perspective , it would take over 4 and a half million bowls of Super Colon Blow to equal the computation power of just 1 of these things !</tokentext>
<sentencetext>To put this into perspective, it would take over 4 and a half million bowls of Super Colon Blow to equal the computation power of just 1 of these things!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30134098</id>
	<title>Re:human brain</title>
	<author>Anonymous</author>
	<datestamp>1258449960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>the amount of transistors is the least of our problems. we need a complex organization of those units in a way that we cannot fathom, and are not likely to be able to any time soon.</p></htmltext>
<tokenext>the amount of transistors is the least of our problems .
we need a complex organization of those units in a way that we can not fathom , and are not likely to be able to any time soon .</tokentext>
<sentencetext>the amount of transistors is the least of our problems.
we need a complex organization of those units in a way that we cannot fathom, and are not likely to be able to any time soon.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120452</id>
	<title>Re:Oink, oink</title>
	<author>vtcodger</author>
	<datestamp>1258405020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>Isn't quantum computing supposed to solve all these problems without need for a zillion cores?  Or have I latched onto the wrong panacea here?</p></htmltext>
<tokenext>Is n't quantum computing supposed to solve all these problems without need for a zillion cores ?
Or have I latched onto the wrong panacea here ?</tokentext>
<sentencetext>Isn't quantum computing supposed to solve all these problems without need for a zillion cores?
Or have I latched onto the wrong panacea here?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119698</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119944</id>
	<title>Re:100 Million?</title>
	<author>Stargoat</author>
	<datestamp>1258403340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I suspect that we will end up calling this a heuristically designed processor.  Or something similar....</htmltext>
<tokenext>I suspect that we will end up calling this a heuristically designed processor .
Or something similar... .</tokentext>
<sentencetext>I suspect that we will end up calling this a heuristically designed processor.
Or something similar....</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119870</id>
	<title>Re:Who's President, Future-boy?</title>
	<author>Anonymous</author>
	<datestamp>1258403160000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
<htmltext><p>My cell phone is a supercomputer. At least, it would have been if I'd had it in 1972. Rather than being from the future, he, like me, is from the past, living in this science fiction future where all that fantasy stuff is real: doors that open by themselves, rockets to space, phones that need no wires and fit in your pocket, computers on your desk, ovens that bake a potato in three minutes without the oven getting hot, flat screen TVs that aren't round at the corners, eye implants that cure nearsightedness, farsightedness, astigmatism and cataracts all at once, etc.</p><p>Back when I was young it didn't seem primitive at all. Looking back, GEES. When you went to the hospital they knocked you out with automotive starting fluid and left scars eight inches wide. These days they say "you're going to sleep now" and you blink and find yourself in the recovery room, feeling no pain or nausea with a tiny scar.</p><p>We are indeed living in primitive times. Back in the 1870s a man quit the Patent Office on the grounds that everything useful had already been invented. If you're young enough you're going to see things that you couldn't imagine, or at least couldn't believe possible.</p><p><a href="http://slashdot.org/~mcgrew/journal/229093" title="slashdot.org">Sickness, pain, and death. And Star Trek.</a> [slashdot.org]</p></htmltext>
<tokenext>My cell phone is a supercomputer .
At least , it would have been if I 'd had it in 1972 .
Rather then being from the future , he , like me , is from the past and living in this science fiction future when all that fantasy stuff like doors that open by themselves , rockets to space , phones that need no wires and fit in your pocket , computers on your desk , ovens that bake a potato in three minutes without the oven getting hot , flat screen TVs that are n't round at the corners , eye implants that cure nearsightedness , farsightedness , astigmatism and cataracts all at once , etc.Back when I was young it did n't seem primitive at all .
Looking back , GEES .
When you went to the hospital they knocked you out with automotive starting fluid and left scars eight inches wide .
These days they say " you 're going to sleep now " and you blink and find yourself in the recovery room , feeling no pain or nausea with a tiny scar.We are indeed living in primitive times .
Back in the 1870s a man quit the Patent office on the grounds that everything useful had already been invented .
If you 're young enough you 're going to see things that you could n't imagine , or at least could n't believe possible.Sickness , pain , and death .
And Star Trek .
[ slashdot.org ]</tokentext>
<sentencetext>My cell phone is a supercomputer.
At least, it would have been if I'd had it in 1972.
Rather than being from the future, he, like me, is from the past, living in this science fiction future where all that fantasy stuff is real: doors that open by themselves, rockets to space, phones that need no wires and fit in your pocket, computers on your desk, ovens that bake a potato in three minutes without the oven getting hot, flat screen TVs that aren't round at the corners, eye implants that cure nearsightedness, farsightedness, astigmatism and cataracts all at once, etc. Back when I was young it didn't seem primitive at all.
Looking back, GEES.
When you went to the hospital they knocked you out with automotive starting fluid and left scars eight inches wide.
These days they say "you're going to sleep now" and you blink and find yourself in the recovery room, feeling no pain or nausea with a tiny scar. We are indeed living in primitive times.
Back in the 1870s a man quit the Patent Office on the grounds that everything useful had already been invented.
If you're young enough you're going to see things that you couldn't imagine, or at least couldn't believe possible. Sickness, pain, and death.
And Star Trek.
[slashdot.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120920</id>
	<title>BILL LIVES!</title>
	<author>TiggertheMad</author>
	<datestamp>1258363740000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext>just had an ugly thought...."Windows17 for PC (Personal Cloud)"</htmltext>
<tokenext>just had an ugly thought.... " Windows17 for PC ( Personal Cloud ) "</tokentext>
<sentencetext>just had an ugly thought...."Windows17 for PC (Personal Cloud)"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119638</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124612</id>
	<title>100 million cores not so much...</title>
	<author>w0mprat</author>
	<datestamp>1258380840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Exascale computing may seem mind bogglingly implausible at first glance, but one forgets that logic switch density goes up with the square of the process size reduction. A 1000-fold increase in computing is merely a 10x reduction in process size.
Intel seems confident silicon can approach this, although it may be the realm of graphene and nanotubes.
<br> <br>
1997/8 The first teraflops class supercomputers. We now have 32-45nm silicon.<br> <br>
2008/9: First petaflops class supercomputers. Today, teraflops computing is available in your desktop. A single $100 800-core GPU is theoretically a match for the 1997 #1 supercomputer. <br> <br>
2018/19: A single $100 ASIC should be capable of a petaflop. 3-4nm would be required to keep pace. Enter the era of exascale computing.
<br> <br>
Oddly, Moore's law detractors have been so consistently wrong that the burden of proof is now on the critic.</htmltext>
<tokenext>Exascale computing may seem mind bogglingly implausible at first glance , but one forgets that logic switch density goes up with the square of the process size reduction .
A 1000-fold increase in computing is merely a 10x reduction in process size .
Intel seems confident silicon can approach this , although it may be the realm of graphene and nanotubes .
1997/8 The first teraflops class supercomputers .
We now have 32-45nm silicon .
2008/9 : First petaflops class supercomputers .
Today , teraflops computing is available in your desktop .
A single $ 100 800 core GPU is theoretically a match for the 1997 # 1 supercomputer .
2018/19 : A single $ 100 ASIC should be capable of a petaflop .
3-4nm would be required to keep pace .
Enter the era of exascale computing .
Oddly , Moore 's law detractors have been so consistently wrong that the burden of proof is now on the critic .</tokentext>
<sentencetext>Exascale computing may seem mind bogglingly implausible at first glance, but one forgets that logic switch density goes up with the square of the process size reduction.
A 1000-fold increase in computing is merely a 10x reduction in process size.
Intel seems confident silicon can approach this, although it may be the realm of graphene and nanotubes.
1997/8 The first teraflops class supercomputers.
We now have 32-45nm silicon.
2008/9: First petaflops class supercomputers.
Today, teraflops computing is available in your desktop.
A single $100 800 core GPU is theoretically a match for the 1997 #1 supercomputer.
2018/19: A single $100 ASIC should be capable of a petaflop.
3-4nm would be required to keep pace.
Enter the era of exascale computing.
Oddly, Moore's law detractors have been so consistently wrong that the burden of proof is now on the critic.</sentencetext>
</comment>
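For reference, the 10x figure above implicitly bundles two scaling effects: a linear shrink by s gives roughly s² more logic per die, and classic Dennard-era scaling added roughly another factor of s in clock frequency, so a 10x shrink compounds to ~1000x. Spelled out in Python; the frequency term is the assumption that stopped holding in the mid-2000s, which is exactly why core counts ballooned instead:

s = 10                      # 10x linear feature-size shrink
density_gain   = s ** 2     # ~100x more logic switches per die
frequency_gain = s          # ~10x clock, under classic Dennard scaling
print(density_gain * frequency_gain, "x throughput")   # 1000x
# Density alone only buys 100x; the last 10x must come from clocks or
# from parallelism, i.e. the 100-million-core designs in the article.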
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122412</id>
	<title>First Use of New Machine</title>
	<author>khelms</author>
	<datestamp>1258369200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Q: Is there a God?
<br>
A:<b> There is now!</b></htmltext>
<tokenext>Q : Is there a God ?
A : There is now !</tokentext>
<sentencetext>Q: Is there a God?
A: There is now!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122796</id>
	<title>Re:human brain</title>
	<author>Anonymous</author>
	<datestamp>1258370820000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Let's come up with an algorithm to simulate the human brain before we think about the amount of cores it requires.</p></htmltext>
<tokenext>Let 's come up with an algorithm to simulate the human brain before we think about the amount of cores it requires .</tokentext>
<sentencetext>Let's come up with an algorithm to simulate the human brain before we think about the amount of cores it requires.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120636</id>
	<title>Re:AMD vs Intel</title>
	<author>hattig</author>
	<datestamp>1258362540000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
<htmltext><p>Easy CPU upgrades, because the socket interface stays the same.</p><p>Some of those supercomputers might have gone from dual-core 2GHz Opteron K8s through quad-core Opteron K10s to these new sexa-core Opteron K10.5s with only the need to change the CPUs and the memory.</p><p>Or possibly, if the upgrades were done at a board level, HyperTransport has remained compatible, so your new board of 24 cores just slots into your expensive, custom, HyperTransport-based back-end. To switch to Intel would require designing a QPI-based back-end.</p><p>Of course Magny-Cours and Bulldozer will use the G34 socket, so that's not a plug-in and go upgrade when they come out in 2010 and 2011 respectively. But it will be a stable platform for several years itself, and thus be attractive.</p></htmltext>
<tokenext>Easy CPU upgrades because the socket interface stays the same . Some of those supercomputers might have gone from dual-core 2GHz Opteron K8s through quad-core Opteron K10s to these new sexa-core Opteron K10.5s with only the need to change the CPUs and the memory . Or possibly if the upgrades were done at a board level , HyperTransport has remained compatible , so your new board of 24 cores just slots into your expensive , custom , HyperTransport-based back-end .
To switch to Intel would require designing a QPI-based back-end . Of course Magny-Cours and Bulldozer will use the G34 socket , so that 's not a plug-in and go upgrade when they come out in 2010 and 2011 respectively .
But it will be a stable platform for several years itself , and thus be attractive .</tokentext>
<sentencetext>Easy CPU upgrades, because the socket interface stays the same. Some of those supercomputers might have gone from dual-core 2GHz Opteron K8s through quad-core Opteron K10s to these new sexa-core Opteron K10.5s with only the need to change the CPUs and the memory. Or possibly, if the upgrades were done at a board level, HyperTransport has remained compatible, so your new board of 24 cores just slots into your expensive, custom, HyperTransport-based back-end.
To switch to Intel would require designing a QPI-based back-end. Of course Magny-Cours and Bulldozer will use the G34 socket, so that's not a plug-in and go upgrade when they come out in 2010 and 2011 respectively.
But it will be a stable platform for several years itself, and thus be attractive.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119704</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120350</id>
	<title>You sign your posts manually?</title>
	<author>Anonymous</author>
	<datestamp>1258404720000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>What's it like to be a primitive caveape in the modern world?</p></htmltext>
<tokenext>What 's it like to be a primitive caveape in the modern world ?</tokentext>
<sentencetext>What's it like to be a primitive caveape in the modern world?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124602</id>
	<title>Re:Who's President, Future-boy?</title>
	<author>caywen</author>
	<datestamp>1258380780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"The future is here" is probably the most false, yet constantly true statement ever.</p></htmltext>
<tokenext>" The future is here " is probably the most false , yet constantly true statement ever .</tokentext>
<sentencetext>"The future is here" is probably the most false, yet constantly true statement ever.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119870</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122552</id>
	<title>Re:human brain</title>
	<author>iprefermuffins</author>
	<datestamp>1258369680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>One for each neuron should do nicely...</htmltext>
<tokenext>One for each neuron should do nicely.. .</tokentext>
<sentencetext>One for each neuron should do nicely...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120378</id>
	<title>20 Megawatt power supply...</title>
	<author>tomhath</author>
	<datestamp>1258404780000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>0</modscore>
<htmltext><p><div class="quote"><p>IBM's design goal for an exascale system is to limit it to 20 megawatts of power...</p></div><p>Just keeping that sucker cooled will contribute to global warming. I hope they're going to use all that waste heat for something.</p>
	</htmltext>
<tokenext>IBM 's design goal for an exascale system is to limit it to 20 megawatts of power ... Just keeping that sucker cooled will contribute to global warming .
I hope they 're going to use all that waste heat for something .</tokentext>
<sentencetext>IBM's design goal for an exascale system is to limit it to 20 megawatts of power... Just keeping that sucker cooled will contribute to global warming.
I hope they're going to use all that waste heat for something.
	</sentencetext>
</comment>
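That 20 MW budget, divided by an exaflop, pins down the real design target: energy per operation. Quick Python arithmetic; the Jaguar comparison uses round public figures of roughly 7 MW for ~2 petaflops:

power_w = 20e6            # IBM's 20 MW target
flops   = 1e18            # one exaflop sustained
print(power_w / flops * 1e12, "pJ per flop")          # 20.0 pJ/flop

jaguar = 7e6 / 2e15 * 1e12                            # ~3500 pJ/flop
print(round(jaguar), "pJ per flop for a Jaguar-class machine")
# So the 20 MW goal demands on the order of a 100x efficiency gain,
# which is why the cooling question is the right one to ask.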
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120014</id>
	<title>synergy</title>
	<author>Anonymous</author>
	<datestamp>1258403580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>They should obviously start working with the Mandlebulb people..</p></htmltext>
<tokenext>They should obviously start working with the Mandlebulb people. .</tokentext>
<sentencetext>They should obviously start working with the Mandlebulb people..</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125256</id>
	<title>Re:Who's President, Future-boy?</title>
	<author>Xyrus</author>
	<datestamp>1258386960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>Especially considering the software developed for such platforms. Most models run on supercomputers are written in Fortran. If there is any real software engineering, it's more of an afterthought. The core of some of these models still contains F77 code, with F77 styles and program structure complete with detailed variable names such as iqxn4. They're not well documented. They're very fragile. Porting to a new system (even if running just a different flavor of Linux) can take weeks to months because of the way implementation is done. Sometimes just going to a new version of the same compiler will cause things to break. And sometimes you need to be an advanced Linux user just to get everything to run, let alone compiling and linking.</p><p>Models up to this point have been ad hoc conglomerations of pieces of code individuals have written and managed to stick together. These often are individuals who were not trained to be software engineers and know enough about programming to be dangerous. Sometimes these codes were never intended to be used outside of a research paper (but were used anyway). The models work, but not much else.</p><p>In fact, if you want a prime example of how software should NOT be written, there's a number of scientific models that fit the bill (complete with 10,000 line subroutines).</p><p>As far as supercomputing goes, we are just barely emerging from the "high-school hacker" days. If you want an application that scales to these levels effectively, you need to have some people who know what they're doing get involved. You also need the tools, which have been sorely lacking in this area.</p><p>Ah well. Bitch and moan.</p><p>~X~</p></htmltext>
<tokenext>Especially considering the software developed for such platforms .
Most models run on supercomputers are written in Fortran .
If there is an real software engineering , it 's more of an afterthought .
The core of some of these models still contain F77 code , with F77 styles and program structure complete with detailed variable names such as iqxn4 .
They 're not well documented .
They 're very fragile .
Porting to a new system ( even if running just a different flavor of Linux ) can take weeks to months because of the way implementation is done .
Sometimes just going to a new version of the same compiler will cause things to break .
And sometimes you need to be an advanced linux just to get everything to run , let alone compiling and linking.Models up to this point have been ad hoc conglomerations of pieces of code individuals have written and managed to stick together .
These often are individuals who were not trained to be software engineers and know enough about programming to be dangerous .
Sometimes these codes were never intended to be used outside of a research paper ( but were used anyway ) .
They models work , but not much else.In fact , if you want a prime examples of how software should NOT be written , there 's a number of scientific models that fit the bill ( complete with 10,000 line subroutines ) .As far as supercomputing goes , we are just barely emerging from the " highschool hacker " days .
If you want an application that scales to these levels effectively , you need to have some people who know what they 're doing get involved .
You also need the tools , which have been sorely lacking in this area.Ah well .
Bitch and moan. ~ X ~</tokentext>
<sentencetext>Especially considering the software developed for such platforms.
Most models run on supercomputers are written in Fortran.
If there is any real software engineering, it's more of an afterthought.
The core of some of these models still contains F77 code, with F77 styles and program structure complete with detailed variable names such as iqxn4.
They're not well documented.
They're very fragile.
Porting to a new system (even if running just a different flavor of Linux) can take weeks to months because of the way implementation is done.
Sometimes just going to a new version of the same compiler will cause things to break.
And sometimes you need to be an advanced Linux user just to get everything to run, let alone compiling and linking. Models up to this point have been ad hoc conglomerations of pieces of code individuals have written and managed to stick together.
These often are individuals who were not trained to be software engineers and know enough about programming to be dangerous.
Sometimes these codes were never intended to be used outside of a research paper (but were used anyway).
The models work, but not much else. In fact, if you want a prime example of how software should NOT be written, there's a number of scientific models that fit the bill (complete with 10,000 line subroutines). As far as supercomputing goes, we are just barely emerging from the "high-school hacker" days.
If you want an application that scales to these levels effectively, you need to have some people who know what they're doing get involved.
You also need the tools, which have been sorely lacking in this area. Ah well.
Bitch and moan. ~X~</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120076</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120046</id>
	<title>Processing power</title>
	<author>Wowsers</author>
	<datestamp>1258403760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Is this going to be the new processor requirement for running Flash in a web browser?</p></htmltext>
<tokenext>Is this going to be the new processor requirement for running Flash in a web browser ?</tokentext>
<sentencetext>Is this going to be the new processor requirement for running Flash in a web browser?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120882</id>
	<title>Re:Who's President, Future-boy?</title>
	<author>turgid</author>
	<datestamp>1258363620000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
<htmltext><p> <i>When a computer develops a mind of its own in a logical manner it's starting to reach the human level and we can start to discuss if it's primitive or not. If it starts to reproduce on its own it's time to be careful.</i>
</p><p>That's not directly related to computing power per se. A computer 100 000 000 times as powerful as today's, running today's software, will still not have developed a mind of its own. It'll just be very, very fast indeed.</p></htmltext>
<tokenext>When a computer develops a mind of it 's own in a logical manner it 's starting to reach the human level and we can start to discuss if it 's primitive or not .
If it starts to reproduce on it 's own it 's time to be careful .
That 's not directly related to computing power per se .
A computer 100 000 000 times as powerful as today 's , running today 's software will still not have developed a mind of its own .
It 'll just be very , very fast indeed .</tokentext>
<sentencetext>When a computer develops a mind of its own in a logical manner it's starting to reach the human level and we can start to discuss if it's primitive or not.
If it starts to reproduce on its own it's time to be careful.
That's not directly related to computing power per se.
A computer 100 000 000 times as powerful as today's, running today's software will still not have developed a mind of its own.
It'll just be very, very fast indeed.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119978</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119986</id>
	<title>Re:100 Million?</title>
	<author>Stupid McStupidson</author>
	<datestamp>1258403520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Fuck everything, we're going 200 million cores</htmltext>
<tokenext>Fuck everything , we 're going 200 million cores</tokentext>
<sentencetext>Fuck everything, we're going 200 million cores</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121860</id>
	<title>Re:Partly a software problem. Erlang?</title>
	<author>shmlco</author>
	<datestamp>1258367460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"... by separating your raw data into 100 raw input files, one for each page..."</p><p>Except that most reports have totals and breaks and subtotals that wouldn't work across such pagination. In fact, you'd practically have to paginate the entire report first to determine what elements to assign to each page.</p></htmltext>
<tokenext>" ... by separating your raw data into 100 raw input files , one for each page... " Except that most reports have totals and breaks and subtotals that would n't work across such pagination .
In fact , you 'd practically have to paginate the entire report first to determine what elements to assign to each page .</tokentext>
<sentencetext>"... by separating your raw data into 100 raw input files, one for each page..."Except that most reports have totals and breaks and subtotals that wouldn't work across such pagination.
In fact, you'd practically have to paginate the entire report first to determine what elements to assign to each page.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120444</parent>
</comment>
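The objection above has a concrete two-pass shape. A minimal C sketch (record values, group breaks, and page length are all invented for the example): the first pass is inherently sequential, because each subtotal line shifts every later page boundary, and only after it completes can per-page rendering be handed out in parallel.

#include <stdio.h>

#define NREC 1000
#define LINES_PER_PAGE 40

int main(void) {
    double amount[NREC];
    double running = 0.0;
    int line = 0, page = 0;
    for (int i = 0; i < NREC; i++)
        amount[i] = (double)(i % 7 + 1);              /* stand-in data */

    /* Pass 1 (sequential): subtotal lines are emitted at group breaks, so
     * the page a record lands on depends on everything before it. */
    for (int i = 0; i < NREC; i++) {
        running += amount[i];
        line++;
        if (i % 10 == 9) line++;                      /* group break: subtotal line */
        if (line >= LINES_PER_PAGE) { page++; line = 0; }
    }
    printf("grand total %.1f; last record lands on page %d\n", running, page);
    /* Pass 2 (parallelizable): with page assignments fixed, each page is
     * independent work -- but only after the sequential pass above. */
    return 0;
}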
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120354</id>
	<title>Re:AMD vs Intel</title>
	<author>Anonymous</author>
	<datestamp>1258404720000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Not energy saving, since the Cell-based ones are clearly on top in performance/Watt. Note that number 2 is actually an AMD/Cell hybrid where the majority of flops are provided by Cell. It has 2/3 the performance of number 1 but 1/3 the power consumption. And the x86-based one that has the same power consumption has half the flops.</p><p>I'm curious to see what it will be in 1 year, since Power7 might be a serious contender. (Power6 has been IBM's Pentium IV: very high clock speed for limited performance and performance/Watt, but the first early Power7 benchmarks look much better.)</p><p>Power7 blades will be more expensive, but the Power7 also has in theory more memory bandwidth, scales better to a larger number of threads per memory coherency domain, and may have much better performance/Watt. In this case, the larger upfront costs may not be decisive (especially when the difference is counted in MW; removing that heat has an impact on the infrastructure).</p></htmltext>
<tokenext>Not energy saving , since the Cell based ones are clearly on top in performance/Watt .
Note that number 2 is actually an AMD/Cell hybrid where the majority of flops are provided by Cell .
It has 2/3 the performance of number 1 but 1/3 the power consumption .
And the x86-based one that has the same power consumption has half the flops .
I 'm curious to see what it will be in 1 year , since Power7 might be a serious contender .
( Power6 has been IBM 's Pentium IV , very high clock speed for limited performance and performance/Watt , but the first early Power7 benchmarks look much better ) .
Power7 blades will be more expensive , but the Power7 also has in theory more memory bandwidth , scales better to a larger number of threads per memory coherency domain , and may have much better performance/Watt .
In this case , the larger upfront costs may not be decisive ( especially when the difference is counted in MW ; removing that heat has an impact on the infrastructure ) .</tokentext>
<sentencetext>Not energy saving, since the Cell-based ones are clearly on top in performance/Watt.
Note that number 2 is actually an AMD/Cell hybrid where the majority of flops are provided by Cell.
It has 2/3 the performance of number 1 but 1/3 the power consumption.
And the x86-based one that has the same power consumption has half the flops.
I'm curious to see what it will be in 1 year, since Power7 might be a serious contender.
(Power6 has been IBM's Pentium IV: very high clock speed for limited performance and performance/Watt, but the first early Power7 benchmarks look much better.)
Power7 blades will be more expensive, but the Power7 also has in theory more memory bandwidth, scales better to a larger number of threads per memory coherency domain, and may have much better performance/Watt.
In this case, the larger upfront costs may not be decisive (especially when the difference is counted in MW; removing that heat has an impact on the infrastructure).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119704</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30137558</id>
	<title>which the US is co-developing</title>
	<author>uigin</author>
	<datestamp>1258463340000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext><p>"...which the US is co-developing." Slightly biased don't you think? It's a global project with a lot of nations contributing major bucks.</p></htmltext>
<tokenext>" ...which the US is co-developing .
" Slightly biased do n't you think ?
It 's a global project with a lot of nations contributing major bucks .</tokentext>
<sentencetext>"...which the US is co-developing.
" Slightly biased don't you think?
It's a global project with a lot of nations contributing major bucks.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122232</id>
	<title>Funny, fusion...</title>
	<author>bryan1945</author>
	<datestamp>1258368540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Probably will need a fusion plant to power and cool the thing.  But still sounds awesome.  They briefly mention data/memory flow issues, but don't really address them.  It is getting to the point where data flow will be as important as processing power, especially as you have escalating processor counts.  You can run as many operations as you want, but if the results can't be delivered somewhere useful, then they are wasted.  I am also very interested in how the overhead will be managed when this many processors are involved.  Multi-processors are not quite 2x (or 4x, 8x, etc.) faster than just one processor due to overhead, and even a really specialized scaled OS &amp; I/O system won't be able to overcome this many processors.</p><p>Now for some fun:<br>It could probably power real-time rendering of a Beowulf cluster of Natalie Portmans in grits while making us submit "All Our Base" to our new "Insert-Here Overlords".  (Did I miss any?)</p></htmltext>
<tokenext>Probably will need a fusion plant to power and cool the thing .
But still sounds awesome .
They briefly mention data/memory flow issues , but do n't really address it .
It is getting to the point where data flow will be as important as processing power , especially as you have escalating processors .
You can run as many operations as you want , but if it ca n't be delivered somewhere useful , then they are wasted .
I am also very interested on how the overhead will be managed when this many processors are involved .
Multi-processors are not quite 2x ( or 4x , 8x , etc. ) faster than just one processor due to overhead , and even a really specialized scaled OS &amp; I/O system wo n't be able to overcome this many processors .
Now for some fun : It could probably power real-time rendering of a Beowulf cluster of Natalie Portmans in grits while making us submit " All Our Base " to our new " Insert-Here Overlords " .
( Did I miss any ? )</tokentext>
<sentencetext>Probably will need a fusion plant to power and cool the thing.
But still sounds awesome.
They briefly mention data/memory flow issues, but don't really address them.
It is getting to the point where data flow will be as important as processing power, especially as you have escalating processor counts.
You can run as many operations as you want, but if the results can't be delivered somewhere useful, then they are wasted.
I am also very interested in how the overhead will be managed when this many processors are involved.
Multi-processors are not quite 2x (or 4x, 8x, etc.) faster than just one processor due to overhead, and even a really specialized scaled OS &amp; I/O system won't be able to overcome this many processors.
Now for some fun: It could probably power real-time rendering of a Beowulf cluster of Natalie Portmans in grits while making us submit "All Our Base" to our new "Insert-Here Overlords".
(Did I miss any?)</sentencetext>
</comment>
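The overhead point above is usually hedged with Amdahl's law: if a fraction $p$ of the work parallelizes perfectly and the rest stays serial, the speedup on $N$ processors is

    S(N) = \frac{1}{(1 - p) + p/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}

Even at $p = 0.95$, no core count, 100 million included, pushes the speedup past 20; and this bound still ignores the communication and I/O costs the comment raises.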
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120438</id>
	<title>Re:Who's President, Future-boy?</title>
	<author>pete-classic</author>
	<datestamp>1258404960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Wow, a bunch of people didn't get what I thought was a simple point.</p><p>I understand that the current state of supercomputing will seem primitive at some point in the future.  In fact, my post is predicated on that notion.</p><p>But words mean things.  At some point in the future everything about our current state of culture and technology will seem primitive.  Describing the current state-of-the-art as primitive is meaningless.  That approach can be applied to any topic equally.</p><p>Let me illustrate by counter-example.  "The practice of medicine in parts of sub-Saharan Africa remains primitive."  See how I'm creating a contrast that conveys meaning?  Such a contrast only exists in the summary if the author has some frame of reference extending into the future.</p><p>Also, I was making a Back to the Future reference.</p><p>*shrug*</p><p>-Peter</p></htmltext>
<tokenext>Wow , a bunch of people did n't get what I thought was a simple point.I understand that the current state of supercomputing will seem primitive at some point in the future .
In fact , my post is predicated on that notion.But words mean things .
At some point in the future everything about our current state of culture and technology will seem primitive .
Describing the current state-of-the-art as primitive is meaningless .
That approach can be applied to any topic equally.Let me illustrate by counter-example .
" The practice of medicine in parts of sub-Saharan Africa remains primitive .
" See how I 'm creating a contrast that conveys meaning ?
Such a contrast only exists in the summary if the author has some frame of reference extending into the future.Also , I was making a Back to the Future reference .
* shrug * -Peter</tokentext>
<sentencetext>Wow, a bunch of people didn't get what I thought was a simple point.I understand that the current state of supercomputing will seem primitive at some point in the future.
In fact, my post is predicated on that notion.But words mean things.
At some point in the future everything about our current state of culture and technology will seem primitive.
Describing the current state-of-the-art as primitive is meaningless.
That approach can be applied to any topic equally.Let me illustrate by counter-example.
"The practice of medicine in parts of sub-Saharan Africa remains primitive.
"  See how I'm creating a contrast that conveys meaning?
Such a contrast only exists in the summary if the author has some frame of reference extending into the future.Also, I was making a Back to the Future reference.
*shrug*-Peter</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830</id>
	<title>human brain</title>
	<author>simoncpu was here</author>
	<datestamp>1258403040000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext>How many cores do we need to simulate a human brain?</htmltext>
<tokenext>How many cores do we need to simulate a human brain ?</tokentext>
<sentencetext>How many cores do we need to simulate a human brain?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124664</id>
	<title>Re:Funny, fusion...</title>
	<author>Anonymous</author>
	<datestamp>1258381380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>It could probably power real time rendering of a Beowulf cluster of Natalie Portmans in grits while making us submit "All Our Base" to our new "Insert-Here Overlords".  (Did I miss any?)</p></div><p>In Soviet Russia, worn out memes miss you!</p></div>
	</htmltext>
<tokenext>It could probably power real time rendering of a Beowulf cluster of Natalie Portmans in grits while making us submit " All Our Base " to our new " Insert-Here Overlords " .
( Did I miss any ? )
In Soviet Russia , worn out memes miss you !</tokentext>
<sentencetext>It could probably power real time rendering of a Beowulf cluster of Natalie Portmans in grits while making us submit "All Our Base" to our new "Insert-Here Overlords".
(Did I miss any?)
In Soviet Russia, worn out memes miss you!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122232</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120316</id>
	<title>One Hundred Billion Cores</title>
	<author>Anonymous</author>
	<datestamp>1258404600000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Made by sharks with frikkin lasers on their heads.</p><p>Is it just me or does this news message just shout Dr. Evil?</p></htmltext>
<tokenext>Made by sharks with frikkin lasers on their heads .
Is it just me or does this news message just shout Dr. Evil ?</tokentext>
<sentencetext>Made by sharks with frikkin lasers on their heads.
Is it just me or does this news message just shout Dr. Evil?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119694</id>
	<title>Re:100 Million?</title>
	<author>Anonymous</author>
	<datestamp>1258402740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The definition of "supercomputer" changes as time goes by. Today's cellphones are yesterday's supercomputers.</p></htmltext>
<tokenext>The definition of " supercomputer " changes as time goes by .
Today 's cellphones are yesterday 's supercomputers .</tokentext>
<sentencetext>The definition of "supercomputer" changes as time goes by.
Today's cellphones are yesterday's supercomputers.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121100</id>
	<title>Re:Limits on simulation.</title>
	<author>David Greene</author>
	<datestamp>1258364520000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p><div class="quote"><p>
Further even in the nice cases like fluid flow, if one tries to do solution adaptive meshing, no uniform grids etc, the time step slows down so much the simulation takes too long even on a 100 million processor machine.
</p></div><p>That's true in general.  However, techniques like dynamic scheduling can help.  Work stealing algorithms and other tricks will probably become part of the general programming model as we move forward.  More and more of this has to be pushed to compilers, runtimes and libraries.
</p></div>
	</htmltext>
<tokenext>Further even in the nice cases like fluid flow , if one tries to do solution adaptive meshing , no uniform grids etc , the time step slows down so much the simulation takes too long even on a 100 million processor machine .
That 's true in general .
However , techniques like dynamic scheduling can help .
Work stealing algorithms and other tricks will probably become part of the general programming model as we move forward .
More and more of this has to be pushed to compilers , runtimes and libraries .</tokentext>
<sentencetext>
Further even in the nice cases like fluid flow, if one tries to do solution adaptive meshing, no uniform grids etc, the time step slows down so much the simulation takes too long even on a 100 million processor machine.
That's true in general.
However, techniques like dynamic scheduling can help.
Work stealing algorithms and other tricks will probably become part of the general programming model as we move forward.
More and more of this has to be pushed to compilers, runtimes and libraries.

	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119614</parent>
</comment>
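Dynamic scheduling of the kind mentioned above is already exposed by mainstream runtimes. A minimal OpenMP sketch in C (the uneven per-cell cost is invented to stand in for an adaptively refined mesh; compile with -fopenmp):

#include <stdio.h>

#define NCELL 100000

/* Stand-in for wildly uneven per-cell work after adaptive refinement. */
static double cell_cost(int i) {
    int reps = (i % 97 == 0) ? 50000 : 50;   /* a few cells are expensive */
    double x = 0.0;
    for (int k = 0; k < reps; k++)
        x += (double)k * 1e-9;
    return x;
}

int main(void) {
    double total = 0.0;
    /* schedule(dynamic) hands chunks to whichever thread is idle -- the
     * runtime-level cousin of work stealing. */
    #pragma omp parallel for schedule(dynamic, 64) reduction(+:total)
    for (int i = 0; i < NCELL; i++)
        total += cell_cost(i);
    printf("total = %f\n", total);
    return 0;
}

With a static schedule, the threads that draw the expensive cells become stragglers; the dynamic schedule illustrates the comment's point that load balancing belongs in the runtime, not the application.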
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120176</id>
	<title>Re:How many problems can these systems really solv</title>
	<author>2obvious4u</author>
	<datestamp>1258404180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Parallel computing is great for solving NP-Complete problems.  If you have enough cores for every possible solution you can have all possible paths process at the same time and compare the results.</htmltext>
<tokenext>Parallel computing is great for solving NP-Complete problems .
If you have enough cores for every possible solution you can have all possible paths process at the same time and compare the results .</tokentext>
<sentencetext>Parallel computing is great for solving NP-Complete problems.
If you have enough cores for every possible solution you can have all possible paths process at the same time and compare the results.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119624</parent>
</comment>
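One hedge on the comment above: "enough cores for every possible solution" of an NP-complete instance means exponentially many cores, since an n-variable problem has 2^n candidate assignments. A toy brute-force sketch in C (the 3-variable formula is invented for illustration) shows the search that would be spread one candidate per core:

#include <stdio.h>

/* Hypothetical formula: (a OR b) AND (NOT a OR c) AND (NOT b OR NOT c). */
static int satisfied(unsigned m) {
    int a = m & 1, b = (m >> 1) & 1, c = (m >> 2) & 1;
    return (a || b) && (!a || c) && (!b || !c);
}

int main(void) {
    int n = 3;
    /* One candidate per "core": 2^n of them, so the imagined core count
     * grows exponentially with problem size. */
    for (unsigned m = 0; m < (1u << n); m++)
        if (satisfied(m))
            printf("satisfying assignment: a=%u b=%u c=%u\n",
                   m & 1, (m >> 1) & 1, (m >> 2) & 1);
    return 0;
}

At n = 3 that is 8 candidates; at n = 100 it is 2^100, far beyond any machine on the Top500 horizon.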
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121146</id>
	<title>Interconnect?</title>
	<author>nokiator</author>
	<datestamp>1258364760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Scaling the number of cores to 100 Million by 2018: A side effect of Moore's Law.<p>
Low latency, high bandwidth interconnect that can mesh 100 Million cores: The Next Big Problem in computer architecture.</p></htmltext>
<tokenext>Scaling the number of cores to 100 Million by 2018 : A side effect of Moore 's Law .
Low latency , high bandwidth interconnect that can mesh 100 Million cores : The Next Big Problem in computer architecture .</tokentext>
<sentencetext>Scaling the number of cores to 100 Million by 2018: A side effect of Moore's Law.
Low latency, high bandwidth interconnect that can mesh 100 Million cores: The Next Big Problem in computer architecture.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122668</id>
	<title>what do we do</title>
	<author>confused one</author>
	<datestamp>1258370100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>when these exascale systems start asking questions and/or making demands?</htmltext>
<tokenext>when these exascale systems start asking questions and/or making demands ?</tokentext>
<sentencetext>when these exascale systems start asking questions and/or making demands?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121324</id>
	<title>Re:Oink, oink</title>
	<author>NewbieProgrammerMan</author>
	<datestamp>1258365420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>It's striking how few supercomputers are sold to commercial companies.</p></div><p>I'm sure that in the early 20th century somebody was saying, "It's striking how few airplanes are sold to commercial companies," and going on to draw the conclusion that government spending on aircraft was a pork program.  (And I'm sure there was some pure pork spending involved, but 100 years later, the overall effect of that kind of spending lets us use airplanes for things that would have been unthinkably expensive when people started spending money on them).</p><p>Today's supercomputer is the next decade's mid-range workstation.  (Yes, I know, duh.)</p></div>
	</htmltext>
<tokenext>It 's striking how few supercomputers are sold to commercial companies.I 'm sure that in the early 20th century somebody was saying , " It 's striking how few airplanes are sold to commercial companies , " and going on to draw the conclusion that government spending on aircraft was a pork program .
( And I 'm sure there was some pure pork spending involved , but 100 years later , the overall effect of that kind of spending lets us use airplanes for things that would have been unthinkably expensive when people started spending money on them ) .Today 's supercomputer is the next decade 's mid-range workstation .
( Yes , I know , duh .
)</tokentext>
<sentencetext>It's striking how few supercomputers are sold to commercial companies.
I'm sure that in the early 20th century somebody was saying, "It's striking how few airplanes are sold to commercial companies," and going on to draw the conclusion that government spending on aircraft was a pork program.
(And I'm sure there was some pure pork spending involved, but 100 years later, the overall effect of that kind of spending lets us use airplanes for things that would have been unthinkably expensive when people started spending money on them.)
Today's supercomputer is the next decade's mid-range workstation.
(Yes, I know, duh.)
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119698</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119820</id>
	<title>Windows 2018</title>
	<author>Anonymous</author>
	<datestamp>1258403040000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>Maybe this thing will have enough power to run Windows by 2018??</p></htmltext>
<tokenext>Maybe this thing will have enough power to run Windows by 2018 ? ?</tokentext>
<sentencetext>Maybe this thing will have enough power to run Windows by 2018??</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120742</id>
	<title>Re:100 Million?</title>
	<author>TheKidWho</author>
	<datestamp>1258363020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Well, what was missing from the article summary is that this computer is going to be built using nVidia GPUs, not CPUs for the majority of computing...</p><p>Although really, with the way Fermi is shaping up, it is turning into a very specialized CPU.</p></htmltext>
<tokenext>Well , what was missing from the article summary is that this computer is going to be built using nVidia GPUs , not CPUs , for the majority of computing ...
Although really , with the way Fermi is shaping up , it is turning into a very specialized CPU .</tokentext>
<sentencetext>Well, what was missing from the article summary is that this computer is going to be built using nVidia GPUs, not CPUs, for the majority of computing...
Although really, with the way Fermi is shaping up, it is turning into a very specialized CPU.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120464</id>
	<title>Re:AMD vs Intel</title>
	<author>confused one</author>
	<datestamp>1258405020000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext>I'd be guessing, but here are three possible reasons AMD might be in that place:<br>
1.)  Value, i.e., lower cost per processor<br>
2.)  Opteron has built-in, straightforward 4-way and 8-way multiprocessor connectivity; Xeon was limited to 2-way connectivity without extra bridge hardware until recently.<br>
3.)  Opteron has higher memory bandwidth than the P4 or Core 2 arch.</htmltext>
<tokenext>I 'd be guessing but here are three possible reasons AMD might be in that place : 1 .
) Value , ie .
lower cost per processor 2 .
) Opteron has built in straight forward 4-way and 8-way multiprocessor connectivity , Xeon was limited to 2-way connectivity without extra bridge hardware , until recently .
3. ) Opteron has higher memory bandwidth than P4 or Core 2 arch .</tokentext>
<sentencetext>I'd be guessing, but here are three possible reasons AMD might be in that place:
1.)  Value, i.e., lower cost per processor
2.)  Opteron has built-in, straightforward 4-way and 8-way multiprocessor connectivity; Xeon was limited to 2-way connectivity without extra bridge hardware until recently.
3.)  Opteron has higher memory bandwidth than the P4 or Core 2 arch.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119704</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30142636</id>
	<title>Re:Limits on simulation.</title>
	<author>spirito</author>
	<datestamp>1257086940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You are right, fluid dynamics simulations parallelize beautifully, but once you start increasing the number of cores the communication between machines will slow things down.
And you can have implicit, time accurate, temporal schemes for which the stability condition is (theoretically) CFL less than infinity. But it's clear that if you want to resolve turbulence time scales (and length scales) on a complicated case with relatively high Reynolds number a 100 million processor machine may not be enough.</htmltext>
<tokenext>You are right , fluid dynamics simulations parallelize beautifully , but once you start increasing the number of cores the communication between machines will slow things down .
And you can have implicit , time accurate , temporal schemes for which the stability condition is ( theoretically ) CFL less than infinity .
But it 's clear that if you want to resolve turbulence time scales ( and length scales ) on a complicated case with relatively high Reynolds number a 100 million processor machine may not be enough .</tokentext>
<sentencetext>You are right, fluid dynamics simulations parallelize beautifully, but once you start increasing the number of cores the communication between machines will slow things down.
And you can have implicit, time accurate, temporal schemes for which the stability condition is (theoretically) CFL less than infinity.
But it's clear that if you want to resolve turbulence time scales (and length scales) on a complicated case with relatively high Reynolds number a 100 million processor machine may not be enough.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119614</parent>
</comment>
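For reference, the stability condition alluded to above is the CFL condition; for explicit time stepping of 1-D advection it takes the familiar form

    \Delta t \le C \, \frac{\Delta x}{|u|}

where u is the advection speed, \Delta x the grid spacing, and C a scheme-dependent constant of order one. Implicit schemes lift this bound in principle (the "CFL less than infinity" above), but resolving turbulent time and length scales at high Reynolds number still forces small steps and fine grids, which is the comment's point.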
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120078</id>
	<title>Re:human brain</title>
	<author>Anonymous</author>
	<datestamp>1258403880000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Only 1, so long as the simulation segfaults.</p></htmltext>
<tokenext>Only 1 , so long as the simulation segfaults .</tokentext>
<sentencetext>Only 1, so long as the simulation segfaults.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123280</id>
	<title>Re:human brain</title>
	<author>AmigaMMC</author>
	<datestamp>1258372920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>About a Gazibillion</htmltext>
<tokenext>About a Gazibillion</tokentext>
<sentencetext>About a Gazibillion</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124784</id>
	<title>I've often wondered...</title>
	<author>petrus4</author>
	<datestamp>1258382520000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>...what might happen if we could run a copy of The Sims on a truly massive supercomputer.  It would need to be somewhat customised for that particular machine/environment, of course, but I think it could be interesting.</p><p>There were times when I did see something close to genuinely emergent behaviour in The Sims 2, or more specifically, emergent combinations of pre-existing routines.  You need to set things up for them in a way which is somewhat out of the box, and definitely not in line with real-world human architectural or aesthetic norms, but it can happen.</p><p>Makes me think: if we could run The Sims, or the bots from some currently existing FPS, in parallel on a sufficiently large scale, we might eventually start seeing some very interesting results come from it, at least within the contexts of said games.</p></htmltext>
<tokenext>...what might happen if we could run a copy of The Sims on a truly massive supercomputer .
It would need to be somewhat customised for that particular machine/environment , of course , but I think it could be interesting .
There were times when I did see something close to genuinely emergent behaviour in The Sims 2 , or more specifically , emergent combinations of pre-existing routines .
You need to set things up for them in a way which is somewhat out of the box , and definitely not in line with real-world human architectural or aesthetic norms , but it can happen .
Makes me think : if we could run The Sims , or the bots from some currently existing FPS , in parallel on a sufficiently large scale , we might eventually start seeing some very interesting results come from it , at least within the contexts of said games .</tokentext>
<sentencetext>...what might happen if we could run a copy of The Sims on a truly massive supercomputer.
It would need to be somewhat customised for that particular machine/environment, of course, but I think it could be interesting.
There were times when I did see something close to genuinely emergent behaviour in The Sims 2, or more specifically, emergent combinations of pre-existing routines.
You need to set things up for them in a way which is somewhat out of the box, and definitely not in line with real-world human architectural or aesthetic norms, but it can happen.
Makes me think: if we could run The Sims, or the bots from some currently existing FPS, in parallel on a sufficiently large scale, we might eventually start seeing some very interesting results come from it, at least within the contexts of said games.</sentencetext>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125640
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_47</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120286
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_52</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122796
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125246
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119944
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124432
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119600
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120006
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119604
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119960
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120886
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119698
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123142
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124756
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_68</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119960
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125350
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124572
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_67</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119704
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120168
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_58</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119642
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120624
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120106
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119698
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120714
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120438
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123620
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119698
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120452
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_66</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120644
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_71</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119960
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125154
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_56</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125528
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120176
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_61</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122872
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120078
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119960
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30126904
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30129884
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119642
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121212
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120076
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125496
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119596
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119750
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120182
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_59</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120642
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124384
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_53</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123280
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_49</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119614
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123388
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_60</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120122
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_46</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119698
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121324
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120350
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122692
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_50</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120076
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125256
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119638
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120920
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119870
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124602
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119704
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120354
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121972
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119658
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_48</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119614
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125726
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_51</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119780
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120444
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121860
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_65</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119698
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120206
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119704
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120636
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122868
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_72</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119614
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30142636
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120616
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119960
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125690
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119960
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30128260
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119986
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_57</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119960
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121552
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119614
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122880
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124032
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_64</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119614
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121100
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_63</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120776
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_54</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119870
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123654
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119698
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123560
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30134098
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_70</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119614
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123106
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122232
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124664
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119694
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122802
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121892
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119704
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120464
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_55</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119704
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120648
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119978
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120882
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_69</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123288
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_16_1849259_62</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122552
</commentlist>
</thread>
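<!--
  The <conversation> elements below appear to serialize the full reply tree
  rooted at each top-level comment, with leading hyphens marking reply depth
  (no hyphen = root, one hyphen = direct reply, two hyphens = reply to that
  reply). Each <thread> element above seems to list a single root-to-leaf
  path through such a tree: for example, thread_09_11_16_1849259_57 lists
  comment ids 30119420, 30119960 and 30121552, which occur at depths 0, 1
  and 2 respectively in conversation09_11_16_1849259.12.
-->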
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122566
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121144
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120212
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120046
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121196
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119830
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122552
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122872
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120078
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30134098
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123280
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124756
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124384
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30129884
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120776
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122796
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124572
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122692
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121146
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122124
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119430
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119870
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124602
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123654
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120076
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125496
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125256
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120616
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119978
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120882
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120438
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125246
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120286
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120642
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120350
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119658
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119860
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119698
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120452
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123560
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121324
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120206
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123142
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120714
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120378
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119638
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120920
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119704
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120168
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120636
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120464
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120354
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120648
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124784
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119642
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121212
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120624
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119614
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122880
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124032
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125726
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123388
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121100
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123106
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30142636
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122232
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124664
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119624
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120176
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123620
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120006
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125528
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120644
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120106
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119820
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119780
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120444
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121860
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119596
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119750
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119748
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120134
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_16_1849259.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119420
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119944
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30124432
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119694
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122802
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119986
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121892
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119960
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125350
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120886
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125154
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30128260
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30126904
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121552
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125690
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30125640
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120182
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120122
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30121972
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30122868
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119604
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30119600
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30123288
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_16_1849259.30120742
</commentlist>
</conversation>
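<!--
  A minimal Python sketch (a hypothetical helper, not part of the source
  serialization) showing how a <commentlist> body from a <conversation>
  element could be rebuilt into a reply forest, assuming each leading
  hyphen encodes one level of reply depth and that depth grows by at most
  one per line:

  def parse_commentlist(text: str) -> list[dict]:
      """Rebuild a forest of reply trees from a depth-prefixed URI list."""
      roots: list[dict] = []
      stack: list[dict] = []  # stack[d] = most recent node seen at depth d
      for raw in text.strip().splitlines():
          line = raw.strip()
          if not line:
              continue
          depth = len(line) - len(line.lstrip("-"))  # count leading hyphens
          node = {"uri": line.lstrip("-"), "replies": []}
          if depth == 0:
              roots.append(node)            # a new top-level comment
          else:
              stack[depth - 1]["replies"].append(node)  # attach to parent
          del stack[depth:]                 # forget deeper branches
          stack.append(node)                # node is now the depth-d cursor
      return roots

  Usage on a toy list (URIs shortened for brevity; the depth-2 prefix is
  built programmatically so this XML comment stays well-formed):

  sample = "\n".join([
      "comment.30119430",                   # depth 0: root
      "-comment.30119870",                  # depth 1: reply to root
      "-" * 2 + "comment.30124602",         # depth 2: reply to that reply
  ])
  forest = parse_commentlist(sample)
  assert forest[0]["replies"][0]["replies"][0]["uri"] == "comment.30124602"
-->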
