<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_06_07_184205</id>
	<title>New Languages Vs. Old For Parallel Programming</title>
	<author>timothy</author>
	<datestamp>1244399400000</datestamp>
	<htmltext><a href="http://www.joabj.com/" rel="nofollow">joabj</a> writes <i>"Getting the most from multicore processors is becoming <a href="http://gcn.com/articles/2009/06/04/java-multicore-programming-tips.aspx">an increasingly difficult task</a> for programmers. DARPA has commissioned a number of new programming languages, notably X10 and Chapel, written especially for developing programs that can be run across multiple processors, <a href="http://gcn.com/blogs/tech-blog/2009/06/new-parallel-processing-languages.aspx">though others see them as too much of a departure</a> to ever gain widespread usage among coders."</i></htmltext>
<tokentext>joabj writes " Getting the most from multicore processors is becoming an increasingly difficult task for programmers .
DARPA has commissioned a number of new programming languages , notably X10 and Chapel , written especially for developing programs that can be run across multiple processors , though others see them as too much of a departure to ever gain widespread usage among coders .
"</tokentext>
<sentencetext>joabj writes "Getting the most from multicore processors is becoming an increasingly difficult task for programmers.
DARPA has commissioned a number of new programming languages, notably X10 and Chapel, written especially for developing programs that can be run across multiple processors, though others see them as too much of a departure to ever gain widespread usage among coders.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243175</id>
	<title>Re:Parallel programming is dead. No one uses it...</title>
	<author>MaXintosh</author>
	<datestamp>1244404800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Not even all scientists. I use a lot of programs that are computationally intensive - stuff that you start running and walk away for a week - and I'd guess about half of the new programs are not written to take advantage of parallel processing. For us, this is seriously frustrating because... well, I don't like having to wait a week for the software to return a collection of answers, even using powerful machines.<br> <br>A big problem for scientific computing - and maybe it's a problem elsewhere, too - is that too many programs are collections of legacy code squashed just-so to make it compilable on a new machine. While I typically applaud the path of least resistance when it comes to work, it makes the software inefficient as heck.</htmltext>
<tokentext>Not even all scientists .
I use a lot of programs that are computationally intensive - stuff that you start running and walk away for a week - and I 'd guess about half of the new programs are not written to take advantage of parallel processing .
For us , this is seriously frustrating because... well , I do n't like having to wait a week for the software to return a collection of answers , even using powerful machines .
A big problem for scientific computing - and maybe it 's a problem elsewhere , too - is that too many programs are collections of legacy code squashed just-so to make it compilable on a new machine .
While I typically applaud the path of least resistance when it comes to work , it makes the software inefficient as heck .</tokentext>
<sentencetext>Not even all scientists.
I use a lot of programs that are computationally intensive - stuff that you start running and walk away for a week - and I'd guess about half of the new programs are not written to take advantage of parallel processing.
For us, this is seriously frustrating because... well, I don't like having to wait a week for the software to return a collection of answers, even using powerful machines.
A big problem for scientific computing - and maybe it's a problem elsewhere, too - is that too many programs are collections of legacy code squashed just-so to make it compilable on a new machine.
While I typically applaud the path of least resistance when it comes to work, it makes the software inefficient as heck.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243165</id>
	<title>Re:What's so hard?</title>
	<author>Anonymous</author>
	<datestamp>1244404740000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>It's not creating threads that's hard; the hard part is getting them to communicate with each other without ever reaching a state where thread a is waiting for thread b while thread b is waiting for thread a.</p></htmltext>
<tokentext>It 's not creating threads that 's hard - it 's getting them to communicate with each other , without ever getting into a situation where thread a is waiting for thread b and thread b is waiting for thread a that 's hard .</tokentext>
<sentencetext>It's not creating threads that's hard - it's getting them to communicate with each other, without ever getting into a situation where thread a is waiting for thread b and thread b is waiting for thread a that's hard.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246169</id>
	<title>Re:What's so hard?</title>
	<author>vikstar</author>
	<datestamp>1244386440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>&gt;&gt; c(0:999) = a(0:999) + b(0:999)</p><p>Matlab.</p></htmltext>
<tokentext>&gt; &gt; c ( 0 : 999 ) = a ( 0 : 999 ) + b ( 0 : 999 ) Matlab .</tokentext>
<sentencetext>&gt;&gt; c(0:999) = a(0:999) + b(0:999)Matlab.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243181</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28248249</id>
	<title>Re:Old languages designed for parallel processing?</title>
	<author>lordholm</author>
	<datestamp>1244452500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I worked with Ada for about two years. It is an awful language. Most of the issues that Ada solves are solvable in C by using assert.</p><p>The main problem with the language is that it is too complex and big, which means that it is impossible to keep the language in your head; you have to look up syntax rules while you are coding, even after you have been writing it for several years.</p><p>The only nice thing in the language is the concurrency model, which actually simplifies the usage of tasks.</p></htmltext>
<tokentext>I worked with Ada for about two years .
It is an awful language .
Most of the issues that Ada solves are solvable in C by using assert.The main problem with the language is that it is to complex and big , this means that it is impossible to keep the language in your head , you have to look-up syntax rules while you are coding , and this even when you have been writing it for several years.The only nice thing in the language is the concurrency model which actually simplifies the usage of tasks .</tokentext>
<sentencetext>I worked with Ada for about two years.
It is an awful language.
Most of the issues that Ada solves are solvable in C by using assert.The main problem with the language is that it is to complex and big, this means that it is impossible to keep the language in your head, you have to look-up syntax rules while you are coding, and this even when you have been writing it for several years.The only nice thing in the language is the concurrency model which actually simplifies the usage of tasks.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244255</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243351</id>
	<title>Re:Awful example in the article</title>
	<author>Anonymous</author>
	<datestamp>1244406180000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>The example in the article is atrocious.</p><p>Why would you want the withdrawal and balance check to run concurrently?</p></div><p>Because it would make it much easier to profit from self-introduced race conditions and other obscure bugs when I get to the ATM tomorrow<nobr> <wbr></nobr>:)</p></htmltext>
<tokentext>The example in the article is atrocious.Why would you want the withdrawal and balance check to run concurrently ? Because it would make it much easier to profit from self-introduced race conditions and other obscure bugs when I get to the ATM tomorrow : )</tokentext>
<sentencetext>The example in the article is atrocious.Why would you want the withdrawal and balance check to run concurrently?Because it would make it much easier to profit from self-introduced race conditions and other obscure bugs when I get to the ATM tomorrow :)
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243129</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28247157</id>
	<title>LIBRARIES!!</title>
	<author>HiThere</author>
	<datestamp>1244396460000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>The main problem faced by each new language is "How do I access all the stuff that's already been done?"</p><p>The "Do it over again" answer hasn't been successful since Sun pushed Java, and Java's initial target was an area that hadn't had a lot of development work.  Sun spent a lot of money pushing Java, and was only partially successful.  Now it probably couldn't be done again even by a major corporation.</p><p>The other main answer is to make calling stuff written in C or C++ (or Java) trivial.  Python has used this to great effect, and Ruby to a slightly lesser one.  Also note Jython, Groovy, Scala, etc.  But if you're after high performance, Java has the dead weight of an interpreter (i.e., virtual machine).  So that basically leaves easy linkage with C or C++.  And both are purely DREADFUL languages to link to, due to pointer/integer conversions and macros.  And callbacks.  Individual libraries can be wrapped, but it's not easy to craft global solutions that work nicely.  gcc has some compiler options that could be used to eliminate macros.  Presumably so do other compilers.  But they definitely aren't standardized.  And you're still left not knowing what's a pointer, so you don't know what memory can be freed.</p><p>The result of this is that getting a new language into a workable state means a tremendous effort to wrap libraries.  And this needs to be done AFTER the language is stabilized.  And the people willing to work on this aren't the same people as the language implementers (who have their own jobs).</p><p>I looked over those language sites, and I couldn't see any sign that thought had been given to either Foreign Function Interfaces or wrapping external libraries.  Possibly they just used different terms, but I suspect not.  My suspicion is that the implementers aren't really interested in language use so much as proving a concept.  So THESE aren't the languages that we want, but they are test-beds for working out ideas that will later be imported into other languages.</p></htmltext>
<tokentext>The main problem faced by each new language is " How do I access all the stuff that 's already been done ?
" The " Do it over again " answer has n't been successful since Sun pushed Java , and Java 's initial target was an area that had n't had a lot of development work .
Sun spent a lot of money pushing Java , and was only partially successful .
Now it probably could n't be done again even by a major corporation.The other main answer is make calling stuff written in C or C + + ( or Java ) trivial.Python has used this to great effect , and Ruby to a slightly lesser one .
Also note Jython , Groovy , Scala , etc .
But if you 're after high performance , Java has the dead weight of an interpreter ( i.e. , virtual machine ) .
So that basically leaves easy linkage with C or C + + .
And both are purely DREADFUL languages to link to , due to pointer/integer conversions and macros .
And callbacks .
Individual libraries can be wrapped , but it 's not easy to craft global solutions that work nicely .
gcc has some compiler options that could be used to eliminate macros .
Presumably so do other compilers .
But they definitely are n't standardized .
And you 're still left not knowing what 's a pointer so you do n't know what memory can be freed.The result of this is that to get a new language into a workable state means a tremendous effort to wrap libraries .
And this needs to be done AFTER the language is stabilized .
And the people willing to work on this are n't the same people as the language implementers ( who have their own jobs ) .I looked over those language sites , and I could n't see any sign that thoughts had been given to either Foreign Function Interfaces or wrapping external libraries .
Possibly they just used different terms , but I suspect not .
My suspicion is that the implementers are n't really interested in language use so much as proving a concept .
So THESE are n't the languages that we want , but they are test-beds for working out ideas that will later be imported into other languages .</tokentext>
<sentencetext>The main problem faced by each new language is "How do I access all the stuff that's already been done?
"The "Do it over again" answer hasn't been successful since Sun pushed Java, and Java's initial target was an area that hadn't had a lot of development work.
Sun spent a lot of money pushing Java, and was only partially successful.
Now it probably couldn't be done again even by a major corporation.The other main answer is make calling stuff written in C or C++ (or Java) trivial.Python has used this to great effect, and Ruby to a slightly lesser one.
Also note Jython, Groovy, Scala, etc.
But if you're after high performance, Java has the dead weight of an interpreter (i.e., virtual machine).
So that basically leaves easy linkage with C or C++.
And both are purely DREADFUL languages to link to, due to pointer/integer conversions and macros.
And callbacks.
Individual libraries can be wrapped, but it's not easy to craft global solutions that work nicely.
gcc has some compiler options that could be used to eliminate macros.
Presumably so do other compilers.
But they definitely aren't standardized.
And you're still left not knowing what's a pointer so you don't know what memory can be freed.The result of this is that to get a new language into a workable state means a tremendous effort to wrap libraries.
And this needs to be done AFTER the language is stabilized.
And the people willing to work on this aren't the same people as the language implementers (who have their own jobs).I looked over those language sites, and I couldn't see any sign that thoughts had been given to either Foreign Function Interfaces or wrapping external libraries.
Possibly they just used different terms, but I suspect not.
My suspicion is that the implementers aren't really interested in language use so much as proving a concept.
So THESE aren't the languages that we want, but they are test-beds for working out ideas that will later be imported into other languages.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243689</id>
	<title>Re:Old languages designed for parallel processing?</title>
	<author>eulernet</author>
	<datestamp>1244365560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Ada was created by a French team: <a href="http://en.wikipedia.org/wiki/Ada_(programming_language)" title="wikipedia.org">http://en.wikipedia.org/wiki/Ada_(programming_language)</a> [wikipedia.org]</p><p>Four teams competed to create a new language suitable for the DoD, and the French team won.</p></htmltext>
<tokentext>Ada was created by a french team : http : //en.wikipedia.org/wiki/Ada \ _ ( programming \ _language ) [ wikipedia.org ] Four teams competed to create a new language suitable for the DoD , and the french team won .</tokentext>
<sentencetext>Ada was created by a french team: http://en.wikipedia.org/wiki/Ada\_(programming\_language) [wikipedia.org]Four teams competed to create a new language suitable for the DoD, and the french team won.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243127</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28248703</id>
	<title>Re:"for" or "while" loops</title>
	<author>Jedi Alec</author>
	<datestamp>1244457660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>"Any program with a "for" or "while" loop in which the results of one iteration"</i></p><p><i>Think about it... In most real world applications, a for or while loop does depend on some variable from a previous iteration. If something inside a loop does not change you have an infinite loop. Something has to change. In a for loop this is the index variable, but in that case, something has changed.<br></i><br><b>perl alert!!!</b></p><p>while(my $shit = shift(@pile_of_shit))<br>{<br>smell_like_roses($shit);<br>}</p><p>In this example the state of the other slabs of shit doesn't matter one bit to the one being treated to smell like roses. I'm running 1 production line when I could be running 8 at the same time.</p><p>Honestly, it makes me cry each time I see a game max out 1 core of my i7 and stutter at the hard stuff while all the cpu power is just sitting there unused.</p></htmltext>
<tokentext>" Any program with a " for " or " while " loop in which the results of one iteration " Think about it... In most real world applications , a for or while loop do depend on some variable in a previous iteration .
If something inside a loop does not change you have an infinite loop .
Something has to change .
In a for loop this is the index variable but in this case , something has changed.perl alert ! !
! while ( my $ shit = shift ( @ pile \ _of \ _shit ) ) { smell \ _like \ _roses ( $ shit ) ; } In this example the state of the other slabs of shit do n't matter one bit to the one being treated to smell like roses .
I 'm running 1 production line while I could be running 8 at the same time.Honestly , it makes me cry each time I see a game max out 1 core of my i7 and stutter at the hard stuff when all the cpu power is just sitting there unused .</tokentext>
<sentencetext>"Any program with a "for" or "while" loop in which the results of one iteration"Think about it... In most real world applications, a for or while loop do depend on some variable in a previous iteration.
If something inside a loop does not change you have an infinite loop.
Something has to change.
In a for loop this is the index variable but in this case, something has changed.perl alert!!
!while(my $shit = shift(@pile\_of\_shit)){smell\_like\_roses($shit);}In this example the state of the other slabs of shit don't matter one bit to the one being treated to smell like roses.
I'm running 1 production line while I could be running 8 at the same time.Honestly, it makes me cry each time I see a game max out 1 core of my i7 and stutter at the hard stuff when all the cpu power is just sitting there unused.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246683</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28253359</id>
	<title>Re:What's so hard?</title>
	<author>sjames</author>
	<datestamp>1244487180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Take one of those threads and make it into 10 that run in parallel. Threading at natural boundaries is easy. The hard part is when you do that and still want to utilize more CPUs.</p><p>Take a for loop that runs over (for example) 0-999. Now split it into 4 threads that together cover that space. You could have thread 1 do 0-249, etc., or you could interleave them (so thread one covers 0,4,8,12 while thread 2 does 1,5,9,13, etc.). Now do it without obfuscating the flow of your code.</p><p>If the computation of N depends on the value of N-1, you're screwed unless you can somehow re-arrange the computation cleverly until that is no longer true. Sometimes you can, sometimes you can't. For example, you might be able to use 2 threads in lockstep such that a partial value of N based on independent variables is computed while the computation of N-1 is completed. If your locking is really efficient and you don't blow the cache out, that might even work. You could up to double the speed that way. Otherwise, you could easily end up slower than the serial implementation. Recognizing the difference before you spend hours (or days) experimenting can be really hard. Making sure your experiment will behave the same way with real world data can be interesting as well.</p><p>In other cases, a coarser-grained parallelism will be the best you can hope for. The strategy there is to isolate the inter-thread dependencies to the border cases. At the borders, once again you'll need efficient locking. You'll also need to compute the optimum number of threads to use for a given array of data. Use too many or too few and you slow down. At the extremes on either end, you run slower than just having a single thread do it.</p></htmltext>
<tokentext>Take one of those threads and make it into 10 that run in parallel .
Threading at natural boundaries is easy .
The hard part is when you do that and still want to utilize more CPUs.Take a for loop that goes from ( for example 0-999 ) .
Now spilt it into 4 threads that together cover that space .
You could have thread 1 do 0-249 , etc or you could interleave them ( so thread one covers 0,4,8,12 while thread 2 does 1,5,9,13 , etc ) .
Now do it without obfuscating the flow of your code.If the computation of N depends on the value of N-1 , you 're screwed unless you can somehow re-arrange the computation cleverly until that is no longer true .
Sometimes you can , sometimes you ca n't .
For example , you might be able to use 2 threads in lockstep such that a partial value of N based on independent variables is computed while the computation of N-1 is completed .
If your locking is really efficient and you do n't blow the cache out , that might even work .
You could up to double the speed that way .
Otherwise , you could easily end up slower than the serial implementation .
Recognizing the difference before you spend hours ( or days ) experimenting can be really hard .
Making sure your experiment will behave the same way with real world data can be interesting as well.In other cases , a coarser grained parallelism will be the best you can hope for .
The strategy there is to isolate the inter-thread dependencies to the border cases .
At the borders , once again you 'll need efficient locking .
You 'll also need to compute the optimum number of threads to use for a given array of data .
Use too many or too few and you slow down .
At the extremes on either end , you run slower than just having a single thread do it .</tokentext>
<sentencetext>Take one of those threads and make it into 10 that run in parallel.
Threading at natural boundaries is easy.
The hard part is when you do that and still want to utilize more CPUs.Take a for loop that goes from (for example 0-999).
Now spilt it into 4 threads that together cover that space.
You could have thread 1 do 0-249, etc or you could interleave them (so thread one covers 0,4,8,12 while thread 2 does 1,5,9,13, etc).
Now do it without obfuscating the flow of your code.If the computation of N depends on the value of N-1, you're screwed unless you can somehow re-arrange the computation cleverly until that is no longer true.
Sometimes you can, sometimes you can't.
For example, you might be able to use 2 threads in lockstep such that a partial value of N based on independent variables is computed while the computation of N-1 is completed.
If your locking is really efficient and you don't blow the cache out, that might even work.
You could up to double the speed that way.
Otherwise, you could easily end up slower than the serial implementation.
Recognizing the difference before you spend hours (or days) experimenting can be really hard.
Making sure your experiment will behave the same way with real world data can be interesting as well.In other cases, a coarser grained parallelism will be the best you can hope for.
The strategy there is to isolate the inter-thread dependencies to the border cases.
At the borders, once again you'll need efficient locking.
You'll also need to compute the optimum number of threads to use for a given array of data.
Use too many or too few and you slow down.
At the extremes on either end, you run slower than just having a single thread do it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243665</id>
	<title>Re:What's so hard?</title>
	<author>burnetd</author>
	<datestamp>1244365380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Off the top of my head, one....</p><p>#pragma omp parallel for schedule(static)</p><p>On OSX that'll use pthreads, and I believe on Linux too; on Windows it'll depend on the compiler used.</p></htmltext>
<tokentext>Of the top of my head , one.... # pragma omp for schedule ( static ) while on OSX that 'll use pthreads , and I believe on Linux too , on Windows it 'll depend on the compiler used .</tokentext>
<sentencetext>Of the top of my head, one....#pragma omp for schedule(static)while on OSX that'll use pthreads, and I believe on Linux too, on Windows it'll depend on the compiler used.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243181</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244173</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>nurb432</author>
	<datestamp>1244370180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If we all become part of some huge cloud and share our (mostly mobile) resources by default, it may apply even to the most lowly of text editors.</p></htmltext>
<tokentext>If we all become part of some huge cloud and share our ( mostly mobile ) resources by default , it may apply even to the most lowly of text editors .</tokentext>
<sentencetext>If we all become part of some huge cloud and share our ( mostly mobile ) resources by default, it may apply even to the most lowly of text editors.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246683</id>
	<title>"for" or "while" loops</title>
	<author>wfstanle</author>
	<datestamp>1244391480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"Any program with a "for" or "while" loop in which the results of one iteration"</p><p>Think about it... In most real world applications, a for or while loop does depend on some variable from a previous iteration.  If something inside a loop does not change you have an infinite loop.  Something has to change.  In a for loop this is the index variable, but in that case, something has changed.</p></htmltext>
<tokentext>" Any program with a " for " or " while " loop in which the results of one iteration " Think about it... In most real world applications , a for or while loop do depend on some variable in a previous iteration .
If something inside a loop does not change you have an infinite loop .
Something has to change .
In a for loop this is the index variable but in this case , something has changed .</tokentext>
<sentencetext>"Any program with a "for" or "while" loop in which the results of one iteration"Think about it... In most real world applications, a for or while loop do depend on some variable in a previous iteration.
If something inside a loop does not change you have an infinite loop.
Something has to change.
In a for loop this is the index variable but in this case, something has changed.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243111</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243419</id>
	<title>Re:What's so hard?</title>
	<author>Anonymous</author>
	<datestamp>1244406720000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>
I guess this is where 'restrict' comes in.  If a, b and c can be determined to be non-aliasing and non-overlapping, the compiler may auto-vectorise that for you on an appropriate architecture.
</p><p>
That said, <a href="http://en.wikipedia.org/wiki/Handel_C" title="wikipedia.org" rel="nofollow">Handel C</a> [wikipedia.org], a dying dialect of C which targeted FPGAs, would let you do the following:
</p><p>

par (i = 0; i &lt; 1000; i++)<br>
    c[i] = a[i] + b[i];

</p><p>
This would build a massive amount of logic to perform the 1000 adds in parallel on an FPGA, but it's nice syntax.  The <tt>par</tt> could also be replaced with <tt>seq</tt> to make a sequential version (still using lots of logic, since <tt>seq</tt> is like an unrolled loop).
</p></htmltext>
<tokentext>I guess this is where 'restrict ' comes in .
If a , b and c can be determined as aliases and non-overlapping , the compiler may auto-vectorise that for you on an appropriate architecture .
That said , in Handel C [ wikipedia.org ] , a dying dialect of C which targeted FPGA would let you do the following : par ( i = 0 ; i &lt; 1000 ; i + + ) c [ i ] = a [ i ] + b [ i ] ; This would build a massive amount of logic to perform the 1000 adders in parallel on an FPGA , but it 's nice syntax .
The par could also be replaced with seq to make a sequential version ( sill using lots of logic since it seq is like an unrolled loop ) .</tokentext>
<sentencetext>
I guess this is where 'restrict' comes in.
If a, b and c can be determined as aliases and non-overlapping, the compiler may auto-vectorise that for you on an appropriate architecture.
That said, in Handel C [wikipedia.org], a dying dialect of C which targeted FPGA would let you do the following:


par (i = 0; i &lt; 1000; i++)
    c[i] = a[i] + b[i];


This would build a massive amount of logic to perform the 1000 adders in parallel on an FPGA, but it's nice syntax.
The par could also be replaced with seq to make a sequential version (still using lots of logic, since seq is like an unrolled loop).
</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243181</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243109</id>
	<title>Re:What's so hard?</title>
	<author>Anonymous</author>
	<datestamp>1244404380000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>Do your programs ever leak memory? Did you have to work with a team of 100+ SWE's to write the program? Did you have technical specs to satisfy, or was this a weekend project? This is the difference between swimming 100 meters and sailing across the Pacific.</htmltext>
<tokenext>Do your programs ever leak memory ?
Did you have to work with a team of 100 + SWE 's to write the program ?
Did you have technical specs to satisfy , or was this a weekend project ?
This is the difference between swimming 100 meters and sailing across the Pacific .</tokentext>
<sentencetext>Do your programs ever leak memory?
Did you have to work with a team of 100+ SWE's to write the program?
Did you have technical specs to satisfy, or was this a weekend project?
This is the difference between swimming 100 meters and sailing across the Pacific.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246697</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>Anonymous</author>
	<datestamp>1244391540000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>If making software run in parallel is a performance optimization, why not treat it just like any performance optimization, there to be implemented only for performance-critical pieces of code?  If the need is there, coders will have the motivation and the pay to implement it.  All we need is simplified message passing.</p></htmltext>
<tokenext>If making software run in parallel is a performance optimization , why not treat it just like any performance optimization , there to be implemented only for performance-critical pieces of code ?
If the need is there , coders will have the motivation and the pay to implement it .
All we need is simplified message passing .</tokentext>
<sentencetext>If making software run in parallel is a performance optimization, why not treat it just like any performance optimization, there to be implemented only for performance-critical pieces of code?
If the need is there, coders will have the motivation and the pay to implement it.
All we need is simplified message passing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243487</id>
	<title>Re:Parallel programming is dead. No one uses it...</title>
	<author>hedwards</author>
	<datestamp>1244407200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>That's funny, because Fallout 3 for instance is enhanced for multicore. I'd assume there were other games.<br> <br>

I'd be skeptical of allowing smart compilers to do the programming for us, a compiler no matter how smart isn't the programmer.</htmltext>
<tokenext>That 's funny , because Fallout 3 for instance is enhanced for multicore .
I 'd assume there were other games .
I 'd be skeptical of allowing smart compilers to do the programming for us , a compiler no matter how smart is n't the programmer .</tokentext>
<sentencetext>That's funny, because Fallout 3 for instance is enhanced for multicore.
I'd assume there were other games.
I'd be skeptical of allowing smart compilers to do the programming for us, a compiler no matter how smart isn't the programmer.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244499</id>
	<title>Re:What's so hard?</title>
	<author>Anonymous</author>
	<datestamp>1244372160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>How many lines does it take you to parallelize this with pthreads in C?</p><p>for (i = 0; i &lt; 1000; i++)

  c[i] = a[i] + b[i];</p><p>If it takes you more than 2 lines, then your "language" is too hard to be used everywhere by everyone.</p></div><p>
Point 1: This is an absurdly simple example<br>
Point 2: assuming variable 'threadcount' divides 1000 cleanly and you have a function called 'start\_the\_thread(int *, static int*, static int*, size)'...
<br> <br>

for (i = 0; i &lt; 1000; i += (1000 / threadcount))<br>


   start\_the\_thread(c+i, a+i, b+i, 1000/threadcount);<br> <br>
<br>

Maybe it doesn't look clean, but it is 2 lines. And I'm sure someone else could do it a lot better than that.</p></div>
	</htmltext>
<tokenext>How many lines does it take you to parallelize this with pthreads in C ? for ( i = 0 ; i If it takes you more than 2 lines , then your " language " is too hard to be used everywhere by everyone .
Point 1 : This is an absurdly simple example Point 2 : assuming 1000 divides cleanly into variable 'threadcount ' and you have a function called 'start \ _the \ _thread ( int * , static int * , static int * , size ) .. . for ( i = 0 ; i start \ _the \ _thread ( c + i , a + i , b + i , 1000/threadcount ) ; Maybe it does n't look clean , but it is 2 lines .
And I 'm sure someone else could do it a lot better that that .</tokentext>
<sentencetext>How many lines does it take you to parallelize this with pthreads in C? for (i = 0; i &lt; 1000; i++) c[i] = a[i] + b[i]; If it takes you more than 2 lines, then your "language" is too hard to be used everywhere by everyone.
Point 1: This is an absurdly simple example
Point 2: assuming variable 'threadcount' divides 1000 cleanly and you have a function called 'start\_the\_thread(int *, static int*, static int*, size)'...
 

for (i = 0; i &lt; 1000; i += (1000 / threadcount))


   start\_the\_thread(c+i, a+i, b+i, 1000/threadcount); 


Maybe it doesn't look clean, but it is 2 lines.
And I'm sure someone else could do it a lot better than that.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243181</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245553</id>
	<title>Re:What's so hard?</title>
	<author>Beezlebub33</author>
	<datestamp>1244380680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext>Why does it need a library?  Why isn't this the compiler's job?
<br>
<br>
Does an operation have a side effect?  If not, then it can happen in parallel with other operations that are working on different data.  In this case, there isn't a side effect (unless you're in C++ land and you've changed the meaning of +) and each operation occurs on a different piece of data.  So, if you have 1000 cores, they should all happen at the same time.  The problem is that 1) you are going to be I/O bound and 2) this is a trivial example, and in general it's much harder to determine if there are side effects.  Which is why C sucks for parallelism and Erlang rocks.</htmltext>
<tokenext>Why does it need a library ?
why is n't this the compiler 's job ?
Does an operation have a side effect ?
If not , then it can happen in parallel with other operations that are working on different data .
In this case , there is n't a side effect ( unless you 're in C + + land and you 've changed the meaning of + ) and each operation occurs on a different piece of data .
So , if you have 1000 cores , they should all happen at the same time .
the problem is that 1 ) you are going to be I/O bound and 2 ) this is a trivial example and in general it 's much harder to determine if there are side effects .
Which is why C sucks for parallelism and Erlang rocks .</tokentext>
<sentencetext>Why does it need a library?
why isn't this the compiler's job?
Does an operation have a side effect?
If not, then it can happen in parallel with other operations that are working on different data.
In this case, there isn't a side effect (unless you're in C++ land and you've changed the meaning of +) and each operation occurs on a different piece of data.
So, if you have 1000 cores, they should all happen at the same time.
the problem is that 1)  you are going to be I/O bound and 2) this is a trivial example and in general it's much harder to determine if there are side effects.
Which is why C sucks for parallelism and Erlang rocks.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243355</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243469</id>
	<title>Re:Clojure</title>
	<author>Cyberax</author>
	<datestamp>1244407080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Check Erlang<nobr> <wbr></nobr>;)</p></htmltext>
<tokenext>Check Erlang ; )</tokentext>
<sentencetext>Check Erlang ;)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243173</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244261</id>
	<title>Re:What's so hard?</title>
	<author>4D6963</author>
	<datestamp>1244370720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p> <i>MP scalability in software is hard, because you don't know (and shouldn't assume) how many CPU cores are present in the user's system.</i> </p><p>Well actually what I did (and what I think should be done) is to do just like with anything else and design it based on a variable, i.e. you know what you want if there's one core, you know what you want if there's 2, or 8, or 64, and based on that you write an algorithm that takes the number of detected cores into account and behaves as you want it to.</p></htmltext>
<tokenext>MP scalability in software is hard , because you do n't know ( and should n't assume ) how many CPU cores are present in the user 's system .
Well actually what I did ( and what I think should be done ) is to do just like with anything else and design it based on a variable , i.e .
you know what you want if there 's one core , you know what you want if there 's 2 , or 8 , or 64 , and based on that you write an algorithm that takes the number of detected cores into account and behaves as you want it to .</tokentext>
<sentencetext> MP scalability in software is hard, because you don't know (and shouldn't assume) how many CPU cores are present in the user's system.
Well actually what I did (and what I think should be done) is to do just like with anything else and design it based on a variable, i.e.
you know what you want if there's one core, you know what you want if there's 2, or 8, or 64, and based on that you write an algorithm that takes the number of detected cores into account and behaves as you want it to.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243359</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243603</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>AuMatar</author>
	<datestamp>1244408040000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
<htmltext><p>And how many of those cores are above 2\% utilization for 90\% of the day?  Parallelization on the desktop is a solution in search of a problem - we have in a single core dozens of times what the average user needs.  My email, web browsing, word processor, etc. aren't CPU limited.  They're network limited, and after that they're user limited (a human can only read so many slashdot stories a minute).  There's no point in anything other than servers having 4 or 8 cores.  But if Intel doesn't fool people into thinking they need new computers their revenue will go down, so 16 core desktops next year it is.</p></htmltext>
<tokenext>And how many of those cores are above 2 \ % utilization for 90 \ % of the day ?
Parallelization on the desktop is a solution in search of a problem - we have in a single core dozens of times what the average user needs .
My email , web browsing , word processor , etc are n't cpu limited .
They 're network limited , and after that they 're user limited ( a human can only read so many slashdot stories a minute ) .
There 's no point in anything other than servers having 4 or 8 cores .
But if Intel does n't fool people into thinking they need new computers their revenue will go down , so 16 core desktops next year it is .</tokentext>
<sentencetext>And how many of those cores are above 2\% utilization for 90\% of the day?
Parallelization on the desktop is a solution in search of a problem - we have in a single core dozens of times what the average user needs.
My email, web browsing, word processor, etc aren't cpu limited.
They're network limited, and after that they're user limited (a human can only read so many slashdot stories a minute).
There's no point in anything other than servers having 4 or 8 cores.
But if Intel doesn't fool people into thinking they need new computers their revenue will go down, so 16 core desktops next year it is.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243223</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28258055</id>
	<title>Re:Parallel programming is dead. No one uses it...</title>
	<author>DragonWriter</author>
	<datestamp>1244461200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>I'd be skeptical of allowing smart compilers to do the programming for us, a compiler no matter how smart isn't the programmer.</p></div></blockquote><p>All a compiler does is write programming code, given an input which is also programming code (usually, in different languages.)</p><p>So, if you don't want a compiler to do that for you, do you hand code all your object code, and then just run a linker?</p></div>
	</htmltext>
<tokenext>I 'd be skeptical of allowing smart compilers to do the programming for us , a compiler no matter how smart is n't the programmer . All a compiler does is write programming code , given an input which is also programming code ( usually , in different languages ) .
So , if you do n't want a compiler to do that for you , do you hand code all your object code , and then just run a linker ?</tokentext>
<sentencetext>I'd be skeptical of allowing smart compilers to do the programming for us, a compiler no matter how smart isn't the programmer. All a compiler does is write programming code, given an input which is also programming code (usually, in different languages.)
So, if you don't want a compiler to do that for you, do you hand code all your object code, and then just run a linker?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243487</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243525</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>Moochman</author>
	<datestamp>1244407500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I think he means "not going anywhere" as in "here to stay". In other words he's agreeing with you.</p><p>But yeah, I read it the other way too the first time around.</p></htmltext>
<tokenext>I think he means " not going anywhere " as in " here to stay " .
In other words he 's agreeing with you . But yeah , I read it the other way too the first time around .</tokentext>
<sentencetext>I think he means "not going anywhere" as in "here to stay".
In other words he's agreeing with you. But yeah, I read it the other way too the first time around.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243223</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023</id>
	<title>Parallel programming is dead.  No one uses it....</title>
	<author>Anonymous</author>
	<datestamp>1244403780000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
<htmltext><p>except scientists who use supercomputers.</p><p>gpgpu.org</p><p>Threading I don't count as parallel processing for the desktop. I don't even hear of any games or applications built for parallel.</p><p>Something like Itanium with the compiler doing all the hard work was interesting - instruction level parallelism.</p><p>We may need to build smart compilers that do the programming for us. All we will do is describe what we want.</p></htmltext>
<tokenext>except scientists who use supercomputers . gpgpu.org Threading I do n't count as parallel processing for the desktop .
I do n't even hear of any games or applications built for parallel . Something like Itanium with the compiler doing all the hard work was interesting - instruction level parallelism . We may need to build smart compilers that do the programming for us .
All we will do is describe what we want .</tokentext>
<sentencetext>except scientists who use supercomputers. gpgpu.org Threading I don't count as parallel processing for the desktop.
I don't even hear of any games or applications built for parallel. Something like Itanium with the compiler doing all the hard work was interesting - instruction level parallelism. We may need to build smart compilers that do the programming for us.
All we will do is describe what we want.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28249055</id>
	<title>I hate it when ...</title>
	<author>Anonymous</author>
	<datestamp>1244462160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I hate it when I get asked if I've finished my Forking program by the boss.  You think he'd mind his language!</p></htmltext>
<tokenext>I hate it when I get asked if I 've finished my Forking program by the boss .
You think he 'd mind his language !</tokentext>
<sentencetext>I hate it when I get asked if I've finished my Forking program by the boss.
You think he'd mind his language!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28249829</id>
	<title>Re:I'm waiting for parallel libs for R</title>
	<author>Anonymous</author>
	<datestamp>1244469540000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>Try 'multicore'. I've used it successfully and it is really easy to use. Ideally I'd like to get RScaLAPACK to work, but I haven't spent much time trying beyond finding out that the RScaLAPACK rpm in the Fedora repository doesn't seem to work on my machine. gl</p></htmltext>
<tokenext>Try 'multicore ' I 've used it successfully and it is really easy to use Ideally I 'd like to get RScaLAPACK to work , but I have n't spent much time trying beyond finding out that the RScaLAPACK rpm in the Fedora repository does n't seem to work on my machine .
gl</tokentext>
<sentencetext>Try 'multicore'  I've used it successfully and it is really easy to use  Ideally I'd like to get RScaLAPACK to work, but I haven't spent much time trying beyond finding out that the RScaLAPACK rpm in the Fedora repository doesn't seem to work on my machine.
gl</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242943</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243223</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>global\_diffusion</author>
	<datestamp>1244405100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>"Parallel is not going to go anywhere..."<br> <br>

Really?  Look inside any machine nowadays.  I'm working on an 8 core machine right now.  The individual cores aren't going to get that much faster in the years to come, but the number of cores in a given processor is going to increase dramatically.  Unless you want your programs to stay at the same execution speed for the next 5-10 years, you need parallel.  And what we need is languages and compilers that abstract away the actual hard work so that anybody can make a parallel program.</htmltext>
<tokenext>" Parallel is not going to go anywhere... " Really ?
Look inside any machine nowadays .
I 'm working on an 8 core machine right now .
The individual cores are n't going to get that much faster in the years to come , but the number of cores in a given processor is going to increase dramatically .
Unless you want your programs to stay at the same execution speed for the next 5-10 years , you need parallel .
And what we need is languages and compilers that abstract away the actual hard work so that anybody can make a parallel program .</tokentext>
<sentencetext>"Parallel is not going to go anywhere..." 

Really?
Look inside any machine nowadays.
I'm working on an 8 core machine right now.
The individual cores aren't going to get that much faster in the years to come, but the number of cores in a given processor is going to increase dramatically.
Unless you want your programs to stay at the same execution speed for the next 5-10 years, you need parallel.
And what we need is languages and compilers that abstract away the actual hard work so that anybody can make a parallel program.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243173</id>
	<title>Clojure</title>
	<author>Anonymous</author>
	<datestamp>1244404800000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext>Check out <a href="http://clojure.org/" title="clojure.org">Clojure</a> [clojure.org]. The only programming language around that really addresses the issue of programming in a multi-core environment. It's also quite a sweet language besides that.</htmltext>
<tokenext>Check out Clojure [ clojure.org ] .
The only programming language around that really addresses the issue of programming in a multi-core environment .
It 's also quite a sweet language besides that .</tokentext>
<sentencetext>Check out Clojure [clojure.org].
The only programming language around that really addresses the issue of programming in a multi-core environment.
It's also quite a sweet language besides that.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243709</id>
	<title>Re:Awful example in the article</title>
	<author>grantham</author>
	<datestamp>1244365920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>Why would you want the withdrawal and balance check to run concurrently?</p></div><p>Because it might be too much trouble to get both processes to run on the same node.  With proper barriers, you can make sure they get run in the correct order.</p></div>
	</htmltext>
<tokenext>Why would you want the withdrawal and balance check to run concurrently ? Because it might be too much trouble to get both processes to run on the same node .
With proper barriers , you can make sure they get run in the correct order .</tokentext>
<sentencetext>Why would you want the withdrawal and balance check to run concurrently? Because it might be too much trouble to get both processes to run on the same node.
With proper barriers, you can make sure they get run in the correct order.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243129</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245897</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>ceoyoyo</author>
	<datestamp>1244383620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>And how many of those cores are above 2\% utilization for 90\% of the day?</i></p><p>It doesn't matter.  Nobody cares how utilized their cores are for most of the day.  The question is, how hard do they work when the user comes along and decides to do something that requires them.</p><p>Decoding 1080p H.264 video is something I could use an extra core or two for.  That's not such an esoteric task.</p></htmltext>
<tokenext>And how many of those cores are above 2 \ % utilization for 90 \ % of the day ? It does n't matter .
Nobody cares how utilized their cores are for most of the day .
The question is , how hard do they work when the user comes along and decides to do something that requires them . Decoding 1080p H.264 video is something I could use an extra core or two for .
That 's not such an esoteric task .</tokentext>
<sentencetext>And how many of those cores are above 2\% utilization for 90\% of the day? It doesn't matter.
Nobody cares how utilized their cores are for most of the day.
The question is, how hard do they work when the user comes along and decides to do something that requires them. Decoding 1080p H.264 video is something I could use an extra core or two for.
That's not such an esoteric task.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243603</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244031</id>
	<title>Re:What's so hard?</title>
	<author>Salamander</author>
	<datestamp>1244369220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Spawning threads to handle isolated tasks within a single address space isn't all that hard.  Handling interrelated tasks across more processors than could possibly share one address space, doing it correctly so it doesn't have deadlocks or race conditions, distributing the work so there aren't performance-killing bottlenecks even in a non-trivial system topology handling an ever-varying work distribution, etc. . . . that's where things get just a bit more challenging, and what the newer crop of languages are supposed to handle.  Personally I don't think they help all that much with more than a small class of problems (mostly those that are heavily oriented toward dealing with some kind of regular array), but it's silly to dismiss them without even understanding their problem domain.</p><p>On a slightly different note, BTW, parallel programming hasn't become more difficult.  It has merely become more common.  Whereas it used to be the domain of a few with proper training, now a whole bunch of barely-competent schleps are trying to do it as well.  They see pthread\_create and it blows their mind, and they think now they're masters of parallel programming, hardly aware that creating threads is to parallel programming what the "if" statement is to programming in general - the very first door you have to go through, before you can even realize how much more there is to know.</p></htmltext>
<tokenext>Spawning threads to handle isolated tasks within a single address space is n't all that hard .
Handling interrelated tasks across more processors than could possibly share one address space , doing it correctly so it does n't have deadlocks or race conditions , distributing the work so there are n't performance-killing bottlenecks even in a non-trivial system topology handling an ever-varying work distribution , etc .
. .
. that 's where things get just a bit more challenging , and what the newer crop of languages are supposed to handle .
Personally I do n't think they help all that much with more than a small class of problems ( mostly those that are heavily oriented toward dealing with some kind of regular array ) , but it 's silly to dismiss them without even understanding their problem domain . On a slightly different note , BTW , parallel programming has n't become more difficult .
It has merely become more common .
Whereas it used to be the domain of a few with proper training , now a whole bunch of barely-competent schleps are trying to do it as well .
They see pthread \ _create and it blows their mind , and they think now they 're masters of parallel programming , hardly aware that creating threads is to parallel programming what the " if " statement is to programming in general - the very first door you have to go through , before you can even realize how much more there is to know .</tokentext>
<sentencetext>Spawning threads to handle isolated tasks within a single address space isn't all that hard.
Handling interrelated tasks across more processors than could possibly share one address space, doing it correctly so it doesn't have deadlocks or race conditions, distributing the work so there aren't performance-killing bottlenecks even in a non-trivial system topology handling an ever-varying work distribution, etc.
. .
. that's where things get just a bit more challenging, and what the newer crop of languages are supposed to handle.
Personally I don't think they help all that much with more than a small class of problems (mostly those that are heavily oriented toward dealing with some kind of regular array), but it's silly to dismiss them without even understanding their problem domain. On a slightly different note, BTW, parallel programming hasn't become more difficult.
It has merely become more common.
Whereas it used to be the domain of a few with proper training, now a whole bunch of barely-competent schleps are trying to do it as well.
They see pthread\_create and it blows their mind, and they think now they're masters of parallel programming, hardly aware that creating threads is to parallel programming what the "if" statement is to programming in general - the very first door you have to go through, before you can even realize how much more there is to know.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28247277</id>
	<title>Re:Clojure</title>
	<author>johanatan</author>
	<datestamp>1244397960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>STM has been implemented in Haskell (and costs a 20-30\% penalty) but as far as I know, the actor model is from Erlang, not Haskell.</htmltext>
<tokenext>STM has been implemented in Haskell ( and costs a 20-30 \ % penalty ) but as far as I know , the actor model is from Erlang , not Haskell .</tokentext>
<sentencetext>STM has been implemented in Haskell (and costs a 20-30\% penalty) but as far as I know, the actor model is from Erlang, not Haskell.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244623</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243275</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>gbjbaanb</author>
	<datestamp>1244405520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>ah, but all of those server-side apps are effectively doing a single task, multiple times - ie, each request occurs in a different thread, they do not split 1 request onto several CPUs. That's what all this talk of 'desktop parallelism' is all about.</p><p>So now everyone sees multiple cores on the desktop and think to themselves, that data grid is populating really slowly.. I know, we need to parallelise it, that'll make it go faster! (yeah, sure it will)</p><p>I'm sure there are tasks that will benefit from parallel processing (I think of map routing, not that Google's directions are particularly slow) but the vast majority simply won't be worth the effort to code.</p></htmltext>
<tokenext>ah , but all of those server-side apps are effectively doing a single task , multiple times - i.e. , each request occurs in a different thread , they do not split 1 request onto several CPUs .
That 's what all this talk of 'desktop parallelism ' is all about.So now everyone sees multiple cores on the desktop and thinks to themselves , that data grid is populating really slowly.. I know , we need to parallelise it , that 'll make it go faster !
( yeah , sure it will ) I 'm sure there are tasks that will benefit from parallel processing ( I think of map routing , not that Google 's directions are particularly slow ) but the vast majority simply wo n't be worth the effort to code .</tokentext>
<sentencetext>ah, but all of those server-side apps are effectively doing a single task, multiple times - i.e., each request occurs in a different thread, they do not split 1 request onto several CPUs.
That's what all this talk of 'desktop parallelism' is all about.So now everyone sees multiple cores on the desktop and thinks to themselves, that data grid is populating really slowly.. I know, we need to parallelise it, that'll make it go faster!
(yeah, sure it will)I'm sure there are tasks that will benefit from parallel processing (I think of map routing, not that Google's directions are particularly slow) but the vast majority simply won't be worth the effort to code.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243091</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243153</id>
	<title>combine with netbook tech</title>
	<author>asadodetira</author>
	<datestamp>1244404680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I see some potential in combining innovations meant for the netbooks with multiple processors.
Low power &amp; lightweight software may mix well with multiple CPUs.</htmltext>
<tokenext>I see some potential in combining innovations meant for the netbooks with multiple processors .
Low power &amp; lightweight software may mix well with multiple CPUs .</tokentext>
<sentencetext>I see some potential in combining innovations meant for the netbooks with multiple processors.
Low power &amp; lightweight software may mix well with multiple CPUs.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28249035</id>
	<title>Overly-pervasive imperative programming at fault?</title>
	<author>amn108</author>
	<datestamp>1244461920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>A big part of the problem as we have it already is rooted in the ways we operate computers, ever since it was the only way to do so (slow, little memory, etc).</p><p>We tell our computers not only what to do but out of sheer paranoia HOW to do it. This is because we are not confident we have taught (programmed) the computer to make good decisions and map the road of solutions from the problem to what we want the computer to do, so we employ languages like C to map out every turn the program blindly has to take, no matter the road it is put on really. As most of the programming world, out of habit and what not, employs the most slavish forms of imperative programming, what are the chances that a compiler-translator or an operating system (or the underlying hardware, which IS INHERENTLY IMPERATIVE by nature) are able to override the decisions that the programmer has explicitly made on its behalf and thus ordered it to follow strictly? Granted, some compilers/translators do have freedom of interpretation, but it is also subject to the language specification; I mean, if you express that A is implemented in a B way, then there is only so much the compiler can do and no more.</p><p>To make an analogy, if your teenage son/daughter interns at your law firm as your private secretary of sorts, when told "fetch me that contract from the finance dep. on second floor and bring me a good pencil from third for signing it" he/she might not catch and comment on the fact that you don't sign a contract with a pencil, and just follow through your order blindly. 
If you taught him/her the art of contracts though, he/she might become a much better secretary, might eventually replace you as well :-)</p><p>When the programmer assumes he knows most (of all parties involved) EXACTLY HOW the program should solve the problem across time/space, not only for their own testing hardware but for all the combinations of architectures and environments of his programs' users, then those chances are even slimmer. This is the program optimization problem, which surfaces when we try to compile our serial source code to run on very parallel systems.</p><p>So here we are, discussing ways to parallelize our solutions to common problems, when we are like a one-eyed master who tells a slave not only what to do but also how to do it, instead of educating the slave so that he who has better depth-perception can better guide the master. And I am not talking about sloppy out-of-college programmers; this happens to the very best, because the habit was there for so long. I mean, we had to tell the slave what to do because historically, that slave character was much more blind than the master and severely handicapped in many areas.</p><p>In essence, if we put parallel programming paradigms into an imperative language, how is this going to prevent even great programmers from assuming too much? We need to teach computers how to map the solution themselves, with us only specifying the constraints of such a solution, or the goals of the program. You might say that such assumptions on the human part are always a necessity, because we are just not there yet to have sufficiently intelligent HCI translators, but we should try nevertheless, for the sake of solving several problems at once with one broad look at things. Like one guy here said, how sad it is that opening a bunch of pages in Firefox that do absolutely nothing maxes out a modern multi-core, superscalar, out-of-order executing CPU. 
Is it the fault of a) Firefox slavishly told what to do by programmers that wrote its C code? b) the operating system slavishly doing what kernel calls coming from Firefox tell it to do? or c) the underlying hardware slavishly doing what the CPU tells it to do? I'd guess all of the above are equally involved. But can you blame any of them? All three are, by design, doing their job as they are told.</p></htmltext>
<tokenext>A big part of the problem as we have it already is rooted in the ways we operate computers , ever since it was the only way to do so ( slow , little memory , etc ) .We tell our computers not only what to do but out of sheer paranoia HOW to do it .
This is because we are not confident we have taught ( programmed ) the computer to make good decisions and map the road of solutions from the problem to that what we want computer to do , so we employ languages like C to map out every turn the program blindly has to take , no matter the road it is put on really .
As most of the programming world , out of habit and what not , employs most slavish forms of imperative programming , what are the chances that a compiler-translator or an operating system ( or the underlying hardware which IS INHERENTLY IMPERATIVE by nature ) are able to override the decisions that the programmer itself has explicitly made on its behalf and thus ordered it to follow strictly ?
Granted , some compilers/translators do have freedom of interpretation , but it is also subject to language specification , I mean if you express A is implemented in a B way , then there is so much the compiler can do and no more.To make an analogy , if your teenage son/daughter interns at your law firm as your private secretary of sorts , when told " fetch me that contract from the finance dep .
on second floor and bring me a good pencil from third for signing it " he/she might not catch and comment on the fact that you do n't sign a contract with a pencil , and just follow through your order blindly .
If you taught him/her the art of contracts though , he/she might become a much better secretary , might eventually replace you as well : - ) When the programmer assumes he knows most ( of all parties involved ) EXACTLY HOW the program should solve the problem across time/space , not only for their own testing hardware but for all the combinations of architectures and environments of his programs ' users , then those chances are even slimmer .
This is the program optimization problem , which surfaces when we try to compile our serial source code to run on very parallel systems.So here we are , discussing solving ways to parallelize our solutions to common problems , when we are like a one-eyed master who tells a slave not only what to do but also how to do it , instead of educating the slave so that he which has better depth-perception can better guide the master .
And I am not talking about sloppy out-of-college programmers , this happens to the very best , because the habit was there for so long , I mean we had to tell the slave what to do because historically , that slave character was much more blind than the master and severely handicapped in many areas.In essence , if we put parallel programming paradigms into an imperative language , how is this going to prevent even great programmers from assuming too much ?
We need to teach computers how to map the solution themselves , with us only specifying the constraints of such solution , or goals of the program .
You might say that such assumptions on human part are always a necessity , because we are just not there yet to have sufficiently intelligent HCI translators , but we should try nevertheless , for the sake of solving several problems at once with one broad look at things .
Like one guy here said , how sad it is that opening a bunch of pages in Firefox that do absolutely nothing maxes out a modern multi-core , superscalar , out-of-order executing CPU .
Is it the fault of a ) Firefox slavishly told what to do by programmers that wrote its C code ?
b ) Operating system slavishly doing what kernel calls coming from Firefox tell it to do ?
or c ) the underlying hardware slavishly doing what the CPU tells it to do ?
I 'd guess all of the above are equally involved .
But can you blame either ?
All three are , by design , doing their job as they are told .</tokentext>
<sentencetext>A big part of the problem as we have it already is rooted in the ways we operate computers, ever since it was the only way to do so (slow, little memory, etc).We tell our computers not only what to do but out of sheer paranoia HOW to do it.
This is because we are not confident we have taught (programmed) the computer to make good decisions and map the road of solutions from the problem to that what we want computer to do, so we employ languages like C to map out every turn the program blindly has to take, no matter the road it is put on really.
As most of the programming world, out of habit and what not, employs most slavish forms of imperative programming, what are the chances that a compiler-translator or an operating system (or the underlying hardware which IS INHERENTLY IMPERATIVE by nature) are able to override the decisions that the programmer itself has explicitly made on its behalf and thus ordered it to follow strictly?
Granted, some compilers/translators do have freedom of interpretation, but it is also subject to language specification, I mean if you express A is implemented in a B way, then there is so much the compiler can do and no more.To make an analogy, if your teenage son/daughter interns at your law firm as your private secretary of sorts, when told "fetch me that contract from the finance dep.
on second floor and bring me a good pencil from third for signing it" he/she might not catch and comment on the fact that you don't sign a contract with a pencil, and just follow through your order blindly.
If you taught him/her the art of contracts though, he/she might become a much better secretary, might eventually replace you as well :-)When the programmer assumes he knows most (of all parties involved) EXACTLY HOW the program should solve the problem across time/space, not only for their own testing hardware but for all the combinations of architectures and environments of his programs' users, then those chances are even slimmer.
This is the program optimization problem, which surfaces when we try to compile our serial source code to run on very parallel systems.So here we are, discussing solving ways to parallelize our solutions to common problems, when we are like a one-eyed master who tells a slave not only what to do but also how to do it, instead of educating the slave so that he which has better depth-perception can better guide the master.
And I am not talking about sloppy out-of-college programmers, this happens to the very best, because the habit was there for so long, I mean we had to tell the slave what to do because historically, that slave character was much more blind than the master and severely handicapped in many areas.In essence, if we put parallel programming paradigms into an imperative language, how is this going to prevent even great programmers from assuming too much?
We need to teach computers how to map the solution themselves, with us only specifying the constraints of such solution, or goals of the program.
You might say that such assumptions on human part are always a necessity, because we are just not there yet to have sufficiently intelligent HCI translators, but we should try nevertheless, for the sake of solving several problems at once with one broad look at things.
Like one guy here said, how sad it is that opening a bunch of pages in Firefox that do absolutely nothing maxes out a modern multi-core, superscalar, out-of-order executing CPU.
Is it the fault of a) Firefox slavishly told what to do by programmers that wrote its C code?
b) Operating system slavishly doing what kernel calls coming from Firefox tell it to do?
or c) the underlying hardware slavishly doing what the CPU tells it to do?
I'd guess all of the above are equally involved.
But can you blame either?
All three are, by design, doing their job as they are told.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243289</id>
	<title>Re:Parallel programming is dead. No one uses it...</title>
	<author>quanticle</author>
	<datestamp>1244405700000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>If threading isn't parallelism, then what is?  At what level of separation between separate streams of execution does an application become "parallel"?</p></htmltext>
<tokenext>If threading is n't parallelism , then what is ?
At what level of separation between separate streams of execution does an application become " parallel " ?</tokentext>
<sentencetext>If threading isn't parallelism, then what is?
At what level of separation between separate streams of execution does an application become "parallel"?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243467</id>
	<title>Re:Parallel programming is dead. No one uses it...</title>
	<author>sam_handelman</author>
	<datestamp>1244407080000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext> Parent is kinda flamebait, and it's exactly the opposite of my experience.<br><br>
&nbsp; Scientists (I am one) who also write some of their own code, have much better things to do with our time than to try and make the software efficient.  When we figure out what we want done, we hand it over to professional programmers who, if the cost:benefit analysis works out, will parallelize or optimize it as they're told is needed.  Even lousy programmers are expensive, and hardware is cheap.<br><br>
&nbsp; I 100% agree with the end of his statement - 10 or 15 years ago scientific computing was still done in Fortran FOR A REASON: the optimizing compilers didn't completely suck.  Some scientific computing is still done in FORTRAN but that's been purely a legacy thing since the optimizing compilers for C caught up.  I'm sure someone clever will find some way to get an interpreted language to figure out what depends on what and parallelize your code for you.  This is a very hard problem to do perfectly, but sensible people will quickly realize that's okay.  For some cases, I can beat an optimizing compiler by writing assembly - am I ever going to do that? Hell no.<br><br>
&nbsp; Now, this may result in additional good coding practices which will be required of us so that the optimizing compiler can make easier sense of our code.  Might it be lower overhead to create an optimization friendly programming language, which I suspect will end up amounting to making such practices an explicit requirement?  Probably not, but it depends on how closely these new programming languages adhere to existing languages (I haven't looked at either example discussed in the article.)</htmltext>
<tokenext>Parent is kinda flamebait , and it 's exactly the opposite of my experience .
  Scientists ( I am one ) who also write some of their own code , have much better things to do with our time than to try and make the software efficient .
When we figure out what we want done , we hand it over to professional programmers who , if the cost : benefit analysis works out , will parallelize or optimize it as they 're told is needed .
Even lousy programmers are expensive , and hardware is cheap .
  I 100 % agree with the end of his statement - was it 10 , 15 years ago scientific computing was still done in fortran FOR A REASON - the optimizing compiler did n't completely suck ?
Some scientific computing is still done in FORTRAN but that 's been purely a legacy thing since the optimizing compilers for C caught up .
I 'm sure someone clever will find some way to get an interpreted language to figure out what depends on what and parallelize your code for you .
This is a very hard problem to do perfectly , but sensible people will quickly realize that 's okay .
For some cases , I can beat an optimizing compiler by writing assembly - am I ever going to do that ?
Hell no .
  Now , this may result in additional good coding practices which will be required of us so that the optimizing compiler can make easier sense of our code .
Might it be lower overhead to create an optimization friendly programming language , which I suspect will end up amounting to making such practices an explicit requirement ?
Probably not , but it depends on how closely these new programming languages adhere to existing languages ( I have n't looked at either example discussed in the article .
)</tokentext>
<sentencetext> Parent is kinda flamebait, and it's exactly the opposite of my experience.
  Scientists (I am one) who also write some of their own code, have much better things to do with our time than to try and make the software efficient.
When we figure out what we want done, we hand it over to professional programmers who, if the cost:benefit analysis works out, will parallelize or optimize it as they're told is needed.
Even lousy programmers are expensive, and hardware is cheap.
  I 100% agree with the end of his statement - was it 10, 15 years ago scientific computing was still done in fortran FOR A REASON - the optimizing compiler didn't completely suck?
Some scientific computing is still done in FORTRAN but that's been purely a legacy thing since the optimizing compilers for C caught up.
I'm sure someone clever will find some way to get an interpreted language to figure out what depends on what and parallelize your code for you.
This is a very hard problem to do perfectly, but sensible people will quickly realize that's okay.
For some cases, I can beat an optimizing compiler by writing assembly - am I ever going to do that?
Hell no.
  Now, this may result in additional good coding practices which will be required of us so that the optimizing compiler can make easier sense of our code.
Might it be lower overhead to create an optimization friendly programming language, which I suspect will end up amounting to making such practices an explicit requirement?
Probably not, but it depends on how closely these new programming languages adhere to existing languages (I haven't looked at either example discussed in the article.
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245487</id>
	<title>Re:Parallel programming is dead. No one uses it...</title>
	<author>wirelessbuzzers</author>
	<datestamp>1244379960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Threading isn't parallelism, it's concurrency.  They aren't the same thing.</p><p>A concurrent program is logically divided into several interacting threads.  Concurrency is about the program's semantics, not its performance.</p><p>A parallel program is physically divided into several (not necessarily interacting) operations (usually threads, but might be vector ops) which run at the same time on different pieces of hardware.  Parallelism is about the program's performance, not its semantics.</p><p>A web browser should be concurrent in that the network thread shouldn't block the Javascript thread shouldn't block the interface.  It's less important that it be parallel, though this will improve performance and probably responsiveness when dealing with Javascript-intensive sites and flash movies.</p><p>A web server should usually be parallel for better performance. It's not particularly concurrent because its threads don't interact much.</p><p>It happens that writing a program concurrently makes it easier to parallelize, but it doesn't guarantee that it will be run in parallel.  For example, Concurrent ML performs pretty well despite not supporting SMP.</p></htmltext>
<tokenext>Threading is n't parallelism , it 's concurrency .
They are n't the same thing.A concurrent program is logically divided into several interacting threads .
Concurrency is about the program 's semantics , not its performance.A parallel program is physically divided into several ( not necessarily interacting ) operations ( usually threads , but might be vector ops ) which run at the same time on different pieces of hardware .
Parallelism is about the program 's performance , not its semantics.A web browser should be concurrent in that the network thread should n't block the Javascript thread should n't block the interface .
It 's less important that it be parallel , though this will improve performance and probably responsiveness when dealing with Javascript-intensive sites and flash movies.A web server should usually be parallel for better performance .
It 's not particularly concurrent because its threads do n't interact much.It happens that writing a program concurrently makes it easier to parallelize , but it does n't guarantee that it will be run in parallel .
For example , Concurrent ML performs pretty well despite not supporting SMP .</tokentext>
<sentencetext>Threading isn't parallelism, it's concurrency.
They aren't the same thing.A concurrent program is logically divided into several interacting threads.
Concurrency is about the program's semantics, not its performance.A parallel program is physically divided into several (not necessarily interacting) operations (usually threads, but might be vector ops) which run at the same time on different pieces of hardware.
Parallelism is about the program's performance, not its semantics.A web browser should be concurrent in that the network thread shouldn't block the Javascript thread shouldn't block the interface.
It's less important that it be parallel, though this will improve performance and probably responsiveness when dealing with Javascript-intensive sites and flash movies.A web server should usually be parallel for better performance.
It's not particularly concurrent because its threads don't interact much.It happens that writing a program concurrently makes it easier to parallelize, but it doesn't guarantee that it will be run in parallel.
For example, Concurrent ML performs pretty well despite not supporting SMP.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243289</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28249225</id>
	<title>Re:I'm waiting for parallel libs for R</title>
	<author>Anonymous</author>
	<datestamp>1244464140000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Use snow or snowfall.  This should allow you to parallelize with ease</p></htmltext>
<tokenext>Use snow or snowfall .
This should allow you to parallelize with ease</tokentext>
<sentencetext>Use snow or snowfall.
This should allow you to parallelize with ease</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242943</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243855</id>
	<title>Re:What's so hard?</title>
	<author>Lord Crc</author>
	<datestamp>1244367600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>OK, maybe what I did was simple enough, but I just don't see what's so inherently hard about parallel programming. Surely I am missing something.</p></div><p>For me, the two things that are hardest are designing an efficient parallel algorithm for the target platform and ensuring fast but proper synchronization.</p><p>For instance, if your target is a GPU, then you have a bunch of execution units in parallel, but communication between them is limited. You have to take this into consideration when designing the algorithm.</p><p>If your target is regular CPUs, then you might have a handful of execution units and communication can be fast. However you need to ensure proper synchronization. Locks can be very expensive on some platforms, and so you want to reduce the use of them, especially in highly congested parts of your application. However this can be very difficult to do correctly, so that there aren't any race conditions or data corruption.</p><p>In general, using threads can be very easy; however, for all but the most trivial issues it can be a bit tricky to do it efficiently, so that it's actually worthwhile to use threads. Imho.</p>
	</htmltext>
<tokenext>OK , maybe what I did was simple enough , but I just do n't see what 's so inherently hard about parallel programming .
Surely I am missing something.For me , the two things that are hardest are designing an efficient parallel algorithm for the target platform and ensuring fast but proper synchronization.For instance , if your target is a GPU , then you have a bunch of execution units in parallel , but communication between them is limited .
You have to take this into consideration when designing the algorithm.If your target is regular CPUs , then you might have a handful of execution units and communication can be fast .
However you need to ensure proper synchronization .
Locks can be very expensive on some platforms , and so you want to reduce the use of them , especially in highly congested parts of your application .
However this can be very difficult to do correctly , so that there are n't any race conditions or data corruption.In general , using threads can be very easy however for all but the most trivial issues it can be a bit tricky to do it efficiently , so that it 's actually worthwhile to use threads .
Imho .</tokentext>
<sentencetext>OK, maybe what I did was simple enough, but I just don't see what's so inherently hard about parallel programming.
Surely I am missing something.For me, the two things that are hardest are designing an efficient parallel algorithm for the target platform and ensuring fast but proper synchronization.For instance, if your target is a GPU, then you have a bunch of execution units in parallel, but communication between them is limited.
You have to take this into consideration when designing the algorithm.If your target is regular CPUs, then you might have a handful of execution units and communication can be fast.
However you need to ensure proper synchronization.
Locks can be very expensive on some platforms, and so you want to reduce the use of them, especially in highly congested parts of your application.
However this can be very difficult to do correctly, so that there aren't any race conditions or data corruption.In general, using threads can be very easy however for all but the most trivial issues it can be a bit tricky to do it efficiently, so that it's actually worthwhile to use threads.
Imho.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245959</id>
	<title>Re:What's so hard?</title>
	<author>kevinNCSU</author>
	<datestamp>1244384280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>And add on to that the fact that you need to make sure the results of Thread A and B are deterministic, meaning that the threads won't produce different results depending upon which thread is given cycles first. See <a href="http://en.wikipedia.org/wiki/Race_condition" title="wikipedia.org" rel="nofollow">http://en.wikipedia.org/wiki/Race_condition</a> [wikipedia.org]

I know that's where waiting and therefore deadlock problems come in, but it is usually these race conditions that people overlook.</htmltext>
<tokenext>And add on to that the fact that you need to make sure the results of Thread A and B are deterministic meaning that the threads wo n't produce different results depending upon which thread is given cycles first .
See http : //en.wikipedia.org/wiki/Race_condition [ wikipedia.org ] I know that 's where waiting and therefore deadlock problems come in but it is usually these race conditions that people overlook .</tokentext>
<sentencetext>And add on to that the fact that you need to make sure the results of Thread A and B are deterministic meaning that the threads won't produce different results depending upon which thread is given cycles first.
See http://en.wikipedia.org/wiki/Race_condition [wikipedia.org]

I know that's where waiting and therefore deadlock problems come in but it is usually these race conditions that people overlook.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243165</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246253</id>
	<title>it's getting easier, not harder</title>
	<author>drfireman</author>
	<datestamp>1244387280000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Recent versions of gcc support OpenMP, and there's now experimental support for a multithreading library that I gather is going to be in the next c++ standard.  These don't solve everyone's problems, but certainly it's getting easier, not harder, to take better advantage of multi-processor multi-core systems.  I recently retrofitted some of my own code with OpenMP as a test, and it was ridiculously easy.  Five years ago it would have been a much more irritating process.  I realize not everyone develops in c/c++, nor does everyone use a compiler that supports OpenMP.  But I doubt it's actually getting harder; probably just the rate at which it's getting easier is not the same for everyone.</p></htmltext>
<tokenext>Recent versions of gcc support OpenMP , and there 's now experimental support for a multithreading library that I gather is going to be in the next c + + standard .
These do n't solve everyone 's problems , but certainly it 's getting easier , not harder , to take better advantage of multi-processor multi-core systems .
I recently retrofitted some of my own code with OpenMP as a test , and it was ridiculously easy .
Five years ago it would have been a much more irritating process .
I realize not everyone develops in c/c + + , nor does everyone use a compiler that supports OpenMP .
But I doubt it 's actually getting harder , probably just the rate at which it 's getting easier is not the same for everyone .</tokentext>
<sentencetext>Recent versions of gcc support OpenMP, and there's now experimental support for a multithreading library that I gather is going to be in the next c++ standard.
These don't solve everyone's problems, but certainly it's getting easier, not harder, to take better advantage of multi-processor multi-core systems.
I recently retrofitted some of my own code with OpenMP as a test, and it was ridiculously easy.
Five years ago it would have been a much more irritating process.
I realize not everyone develops in c/c++, nor does everyone use a compiler that supports OpenMP.
But I doubt it's actually getting harder, probably just the rate at which it's getting easier is not the same for everyone.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28248433</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>kill the white man</author>
	<datestamp>1244454660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>right, and 640k ought to be enough for anyone.  Some classes of applications do not stand to benefit from parallelism, but there are so many other types of programs that beg for it.  Also, just because a system is typically under utilized doesn't mean that it shouldn't be able to perform when necessary.</p><p>
I would guess that about 50% of the time, I'm running some CPU intensive computations that completely occupy at least a single core on my machine (usually neuroscience simulations).  Please, don't assume that your "average user" needs are the end all for everyone...
</p></htmltext>
<tokenext>right , and 640k ought to be enough for anyone .
Some classes of applications do not stand to benefit from parallelism , but there are so many other types of programs that beg for it .
Also , just because a system is typically under utilized does n't mean that it should n't be able to perform when necessary .
I would guess that about 50 % of the time , I 'm running some CPU intensive computations that completely occupy at least a single core on my machine ( usually neuroscience simulations ) .
Please , do n't assume that your " average user " needs are the end all for everyone.. .</tokentext>
<sentencetext>right, and 640k ought to be enough for anyone.
Some classes of applications do not stand to benefit from parallelism, but there are so many other types of programs that beg for it.
Also, just because a system is typically under utilized doesn't mean that it shouldn't be able to perform when necessary.
I would guess that about 50% of the time, I'm running some CPU intensive computations that completely occupy at least a single core on my machine (usually neuroscience simulations).
Please, don't assume that your "average user" needs are the end all for everyone...
</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243603</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243181</id>
	<title>Re:What's so hard?</title>
	<author>Anonymous</author>
	<datestamp>1244404800000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>How many lines does it take you to parallelize this with pthreads in C?</p><p>for (i = 0; i &lt; 1000; i++)<br>
&nbsp; &nbsp; c[i] = a[i] + b[i];</p><p>If it takes you more than 2 lines, then your "language" is too hard to be used everywhere by everyone.</p></htmltext>
<tokenext>How many lines does it take you to parallelize this with pthreads in C ? for ( i = 0 ; i &lt; 1000 ; i ++ ) c [ i ] = a [ i ] + b [ i ] ; If it takes you more than 2 lines , then your " language " is too hard to be used everywhere by everyone .</tokentext>
<sentencetext>How many lines does it take you to parallelize this with pthreads in C? for (i = 0; i &lt; 1000; i++)
    c[i] = a[i] + b[i]; If it takes you more than 2 lines, then your "language" is too hard to be used everywhere by everyone.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243349</id>
	<title>Re:What's so hard?</title>
	<author>quanticle</author>
	<datestamp>1244406180000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Making a threaded application in C isn't difficult.  Testing and debugging said application is.  Given that threads share memory, rigorously testing buffer overflow conditions becomes doubly important.  In addition, adding threading introduces a whole new set of potential errors (such as race conditions, deadlocks, etc.) that need to be tested for.</p><p>It's easy enough to create a multi-threaded version of a program when it's for personal use.  However, there are a number of issues that arise whenever a threaded program interacts with the (potentially malicious) outside world, and these issues are not trivial to test for or fix.  That's why I think that parallel programs are going to be increasingly written in functional programming languages (Common Lisp, Haskell, Scala, etc.).  The limitations on side effects that functional languages impose reduce the amount of interaction between threads, and reduce the probability that a failure in a single thread will propagate through the entire application.</p></htmltext>
<tokenext>Making a threaded application in C is n't difficult .
Testing and debugging said application is .
Given that threads share memory , rigorously testing buffer overflow conditions becomes doubly important .
In addition , adding threading introduces a whole new set of potential errors ( such as race conditions , deadlocks , etc .
) that need to be tested for . It 's easy enough to create a multi-threaded version of a program when it 's for personal use .
However , there are a number of issues that arise whenever a threaded program interacts with the ( potentially malicious ) outside world , and these issues are not trivial to test for or fix .
That 's why I think that parallel programs are going to be increasingly written in functional programming languages ( Common Lisp , Haskell , Scala , etc. ) .
The limitations on side effects that functional languages impose reduce the amount of interaction between threads , and reduce the probability that a failure in a single thread will propagate through the entire application .</tokentext>
<sentencetext>Making a threaded application in C isn't difficult.
Testing and debugging said application is.
Given that threads share memory, rigorously testing buffer overflow conditions becomes doubly important.
In addition, adding threading introduces a whole new set of potential errors (such as race conditions, deadlocks, etc.
) that need to be tested for. It's easy enough to create a multi-threaded version of a program when it's for personal use.
However, there are a number of issues that arise whenever a threaded program interacts with the (potentially malicious) outside world, and these issues are not trivial to test for or fix.
That's why I think that parallel programs are going to be increasingly written in functional programming languages (Common Lisp, Haskell, Scala, etc.).
The limitations on side effects that functional languages impose reduce the amount of interaction between threads, and reduce the probability that a failure in a single thread will propagate through the entire application.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28248269</id>
	<title>erlang?</title>
	<author>grimborg</author>
	<datestamp>1244452800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>imho you shouldn't be worrying about parallelizing things, the compiler/interpreter/whatever should take care of that. How about erlang for the job?</htmltext>
<tokenext>imho you should n't be worrying about parallelizing things , the compiler/interpreter/whatever should take care of that .
How about erlang for the job ?</tokentext>
<sentencetext>imho you shouldn't be worrying about parallelizing things, the compiler/interpreter/whatever should take care of that.
How about erlang for the job?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243371</id>
	<title>Re:What's so hard?</title>
	<author>Anonymous</author>
	<datestamp>1244406300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><i>then your "language" is too hard to be used everywhere by everyone.</i></p><p>Perhaps not 'everyone' is meant to be a programmer...</p><p>Let the real men program in parallel</p></htmltext>
<tokenext>then your " language " is too hard to be used everywhere by everyone.Perhaps not 'everyone ' is meant to be a programmer...Let the real men program in parallel</tokentext>
<sentencetext>then your "language" is too hard to be used everywhere by everyone.Perhaps not 'everyone' is meant to be a programmer...Let the real men program in parallel</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243181</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28252917</id>
	<title>Re:I'm waiting for parallel libs for R</title>
	<author>sjames</author>
	<datestamp>1244484840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>even if i'm told that scripted languages won't have much of a future in parallel processing.</p></div><p>That might have been true when parallel processing was almost certainly a cluster. In those days, having a machine capable of parallel processing meant you were looking for the absolute maximum performance and the vast majority would be willing to jump through hoops to do it (including rewriting your app in a compiled language).</p><p>Now that even low cost desktop boxes are likely to have 4 or more cores, that's not necessarily true. There are plenty of people who would modify a program in a scripted language to gain significant benefit, but will not re-implement in C or FORTRAN. In other words, for increasing numbers of problems, the cost of extra parallel CPU cycles is now smaller than the man-hour cost for a re-implementation.</p><p>Meanwhile, the sharp performance divide between scripted languages and formally compiled ones is no longer there. With high level scripting languages compiling to bytecode and encouraging the use of well tested highly optimized algorithms for things like hashing and list handling rather than one-off implementations, plus the potential to use the CPU cache better, scripted programs can be FASTER than compiled sometimes. In the worst case, they're not as much slower as they used to be.</p>
	</htmltext>
<tokenext>even if i 'm told that scripted languages wo n't have much of a future in parallel processing . That might have been true when parallel processing was almost certainly a cluster .
In those days , having a machine capable of parallel processing meant you were looking for the absolute maximum performance and the vast majority would be willing to jump through hoops to do it ( including rewriting your app in a compiled language ) . Now that even low cost desktop boxes are likely to have 4 or more cores , that 's not necessarily true .
There are plenty of people who would modify a program in a scripted language to gain significant benefit , but will not re-implement in C or FORTRAN .
In other words , for increasing numbers of problems , the cost of extra parallel CPU cycles is now smaller than the man-hour cost for a re-implementation . Meanwhile , the sharp performance divide between scripted languages and formally compiled ones is no longer there .
With high level scripting languages compiling to bytecode and encouraging the use of well tested highly optimized algorithms for things like hashing and list handling rather than one-off implementations , plus the potential to use the CPU cache better , scripted programs can be FASTER than compiled sometimes .
In the worst case , they 're not as much slower as they used to be .</tokentext>
<sentencetext>even if i'm told that scripted languages won't have much of a future in parallel processing. That might have been true when parallel processing was almost certainly a cluster.
In those days, having a machine capable of parallel processing meant you were looking for the absolute maximum performance and the vast majority would be willing to jump through hoops to do it (including rewriting your app in a compiled language). Now that even low cost desktop boxes are likely to have 4 or more cores, that's not necessarily true.
There are plenty of people who would modify a program in a scripted language to gain significant benefit, but will not re-implement in C or FORTRAN.
In other words, for increasing numbers of problems, the cost of extra parallel CPU cycles is now smaller than the man-hour cost for a re-implementation. Meanwhile, the sharp performance divide between scripted languages and formally compiled ones is no longer there.
With high level scripting languages compiling to bytecode and encouraging the use of well tested highly optimized algorithms for things like hashing and list handling rather than one-off implementations, plus the potential to use the CPU cache better, scripted programs can be FASTER than compiled sometimes.
In the worst case, they're not as much slower as they used to be.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242943</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28251875</id>
	<title>Re:Parallel programming is dead. No one uses it...</title>
	<author>twopoint718</author>
	<datestamp>1244480400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Agreed. Looking at how some computing resources are set up, there is as much (maybe?) of an emphasis on high-throughput computing as on parallel computing ( <a href="http://en.wikipedia.org/wiki/High-Throughput_Computing" title="wikipedia.org" rel="nofollow">http://en.wikipedia.org/wiki/High-Throughput_Computing</a> [wikipedia.org] ). As in "how do we make all these long-running jobs complete reliably and so as to use the available hardware efficiently?"</htmltext>
<tokenext>Agreed .
Looking at how some computing resources are set up , there is as much ( maybe ?
) of an emphasis on high-throughput computing as on parallel computing ( http : //en.wikipedia.org/wiki/High-Throughput_Computing [ wikipedia.org ] ) .
As in " how do we make all these long-running jobs complete reliably and so as to use the available hardware efficiently ?
"</tokentext>
<sentencetext>Agreed.
Looking at how some computing resources are set up, there is as much (maybe?
) of an emphasis on high-throughput computing as on parallel computing ( http://en.wikipedia.org/wiki/High-Throughput_Computing [wikipedia.org] ).
As in "how do we make all these long-running jobs complete reliably and so as to use the available hardware efficiently?
"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243175</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244623</id>
	<title>Re:Clojure</title>
	<author>shutdown -p now</author>
	<datestamp>1244373120000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p><div class="quote"><p>Check out Clojure [clojure.org]. The only programming language around that really addresses the issue of programming in a multi-core environment.</p></div><p>That's a rather bold statement. You do realize that those neat features of Clojure like STM or actors weren't originally invented for it? In fact, you could do most (all?) of that in Haskell before Clojure even appeared.</p><p>On a side note, while STM sounds great in theory for care-free concurrent programming, the performance penalty that comes with it in existing implementations is hefty. It's definitely a prospective area, but it needs more research before the results are consistently usable in production.</p>
	</htmltext>
<tokenext>Check out Clojure [ clojure.org ] .
The only programming language around that really addresses the issue of programming in a multi-core environment . That 's a rather bold statement .
You do realize that those neat features of Clojure like STM or actors were n't originally invented for it ?
In fact , you could do most ( all ?
) of that in Haskell before Clojure even appeared . On a side note , while STM sounds great in theory for care-free concurrent programming , the performance penalty that comes with it in existing implementations is hefty .
It 's definitely a prospective area , but it needs more research before the results are consistently usable in production .</tokentext>
<sentencetext>Check out Clojure [clojure.org].
The only programming language around that really addresses the issue of programming in a multi-core environment. That's a rather bold statement.
You do realize that those neat features of Clojure like STM or actors weren't originally invented for it?
In fact, you could do most (all?
) of that in Haskell before Clojure even appeared. On a side note, while STM sounds great in theory for care-free concurrent programming, the performance penalty that comes with it in existing implementations is hefty.
It's definitely a prospective area, but it needs more research before the results are consistently usable in production.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243173</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243129</id>
	<title>Awful example in the article</title>
	<author>Anonymous</author>
	<datestamp>1244404500000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>The example in the article is atrocious.</p><p>Why would you want the withdrawal and balance check to run concurrently?</p></htmltext>
<tokenext>The example in the article is atrocious . Why would you want the withdrawal and balance check to run concurrently ?</tokentext>
<sentencetext>The example in the article is atrocious. Why would you want the withdrawal and balance check to run concurrently?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243151</id>
	<title>The type of multithreaded design used is what</title>
	<author>Anonymous</author>
	<datestamp>1244404680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Look up the term "race condition" here -&gt; <a href="http://en.wikipedia.org/wiki/Race_condition" title="wikipedia.org" rel="nofollow">http://en.wikipedia.org/wiki/Race_condition</a> [wikipedia.org] , &amp; you'll get an idea of what the "problems inherent" are - I feel the same as you do though, as long as I keep 1 thread doing 1 task, &amp; another thread of execution doing another (albeit, on 2 diff./discrete sets of data, not working on the SAME set of data) - this is known as "coarse multithreading" (keeping multiple threads of execution off the same dataset) vs. "fine multithreading" (see here for more on that -&gt; <a href="http://www.cs.bu.edu/~best/courses/cs551/lectures/lecture-02.html" title="bu.edu" rel="nofollow">http://www.cs.bu.edu/~best/courses/cs551/lectures/lecture-02.html</a> [bu.edu] ) where the multiple threads of execution work on the same data involved.</p><p>(Bit of an "oversimplification" on my part possibly, but the broad strokes are there - the 'finer points' with examples are on those pages from the URLs above I posted)</p><p>APK</p></htmltext>
<tokenext>Look up the term " race condition " here - &gt; http : //en.wikipedia.org/wiki/Race_condition [ wikipedia.org ] , &amp; you 'll get an idea of what the " problems inherent " are - I feel the same as you do though , as long as I keep 1 thread doing 1 task , &amp; another thread of execution doing another ( albeit , on 2 diff./discrete sets of data , not working on the SAME set of data ) - this is known as " coarse multithreading " ( keeping multiple threads of execution off the same dataset ) vs. " fine multithreading " ( see here for more on that - &gt; http : //www.cs.bu.edu/ ~ best/courses/cs551/lectures/lecture-02.html [ bu.edu ] ) where the multiple threads of execution work on the same data involved .
( Bit of an " oversimplification " on my part possibly , but the broad strokes are there - the 'finer points ' with examples are on those pages from the URLs above I posted ) APK</tokentext>
<sentencetext>Look up the term "race condition" here -&gt; http://en.wikipedia.org/wiki/Race_condition [wikipedia.org] , &amp; you'll get an idea of what the "problems inherent" are - I feel the same as you do though, as long as I keep 1 thread doing 1 task, &amp; another thread of execution doing another (albeit, on 2 diff./discrete sets of data, not working on the SAME set of data) - this is known as "coarse multithreading" (keeping multiple threads of execution off the same dataset) vs. "fine multithreading" (see here for more on that -&gt; http://www.cs.bu.edu/~best/courses/cs551/lectures/lecture-02.html [bu.edu] ) where the multiple threads of execution work on the same data involved.
(Bit of an "oversimplification" on my part possibly, but the broad strokes are there - the 'finer points' with examples are on those pages from the URLs above I posted) APK</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28251239</id>
	<title>Re:What's so hard?</title>
	<author>jknapka</author>
	<datestamp>1244477100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The truly difficult thing is to make parallelization completely automatic -- you write the code in a "natural" way, and the compiler and/or runtime environment figure out how to do the parallelization.  This was a major driving force behind the development of declarative and functional programming languages like Prolog and ML -- when your language doesn't permit side-effects or destructive assignment, then operations can be executed in any order you like, including simultaneously in many cases.  You just need to respect data-flow -- ensure that computations that depend on one another's results are done in the appropriate order.  In the absence of destructive assignment, data-flow is a lot easier to manage.  For example, Prolog queries can be parallelized automatically as long as certain language constructs are avoided, and the Parlog system achieved that to some extent.  This will never be possible with C, because C encourages you to use side effects that cannot be optimized away.</htmltext>
<tokenext>The truly difficult thing is to make parallelization completely automatic -- you write the code in a " natural " way , and the compiler and/or runtime environment figure out how to do the parallelization .
This was a major driving force behind the development of declarative and functional programming languages like Prolog and ML -- when your language does n't permit side-effects or destructive assignment , then operations can be executed in any order you like , including simultaneously in many cases .
You just need to respect data-flow -- ensure that computations that depend on one another 's results are done in the appropriate order .
In the absence of destructive assignment , data-flow is a lot easier to manage .
For example , Prolog queries can be parallelized automatically as long as certain language constructs are avoided , and the Parlog system achieved that to some extent .
This will never be possible with C , because C encourages you to use side effects that can not be optimized away .</tokentext>
<sentencetext>The truly difficult thing is to make parallelization completely automatic -- you write the code in a "natural" way, and the compiler and/or runtime environment figure out how to do the parallelization.
This was a major driving force behind the development of declarative and functional programming languages like Prolog and ML -- when your language doesn't permit side-effects or destructive assignment, then operations can be executed in any order you like, including simultaneously in many cases.
You just need to respect data-flow -- ensure that computations that depend on one another's results are done in the appropriate order.
In the absence of destructive assignment, data-flow is a lot easier to manage.
For example, Prolog queries can be parallelized automatically as long as certain language constructs are avoided, and the Parlog system achieved that to some extent.
This will never be possible with C, because C encourages you to use side effects that cannot be optimized away.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243165</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246945</id>
	<title>Re:Old languages designed for parallel processing?</title>
	<author>Anonymous</author>
	<datestamp>1244394060000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p><div class="quote"><p>If I recall correctly, the Swedish telecom where Erlang was designed had one server running it with 7 continuous years uptime.</p></div><p>Certainly their AXD301 telephony-over-ATM switch has achieved 'nine nines' uptime - i.e. 99.9999999% up, which allows for ~31ms downtime a year.</p><p>British Telecom use them to power their network and they passed the ultimate test (finals voting for Pop Star, the UK original for American Idol) with flying colours.</p><p>And erlang is a fun language to program.</p>
	</htmltext>
<tokenext>If I recall correctly , the Swedish telecom where Erlang was designed had one server running it with 7 continuous years uptime . Certainly their AXD301 telephony-over-ATM switch has achieved 'nine nines ' uptime - i.e .
99.9999999 % up , which allows for ~ 31ms downtime a year . British Telecom use them to power their network and they passed the ultimate test ( finals voting for Pop Star , the UK original for American Idol ) with flying colours . And erlang is a fun language to program .</tokentext>
<sentencetext>If I recall correctly, the Swedish telecom where Erlang was designed had one server running it with 7 continuous years uptime. Certainly their AXD301 telephony-over-ATM switch has achieved 'nine nines' uptime - i.e.
99.9999999% up, which allows for ~31ms downtime a year. British Telecom use them to power their network and they passed the ultimate test (finals voting for Pop Star, the UK original for American Idol) with flying colours. And erlang is a fun language to program.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243457</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243483</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>Anonymous</author>
	<datestamp>1244407200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Yeah like no computer will ever use more than 128k of ram.  Take your blinders off.  Multiple cores and large amounts of memory will allow desktop apps to do things we have never imagined and to do them in concert with the OS and other apps.</p><p>The OS and development tools weigh heavily on all of this which never seems to get mentioned.  With Snow Leopard and Grand Central Dispatch Apple is far ahead.  Linux/Unix have the OS but not the tools.  Windows is screwed.</p></htmltext>
<tokenext>Yeah like no computer will ever use more than 128k of ram .
Take your blinders off .
Multiple cores and large amounts of memory will allow desktop apps to do things we have never imagined and to do them in concert with the OS and other apps . The OS and development tools weigh heavily on all of this which never seems to get mentioned .
With Snow Leopard and Grand Central Dispatch Apple is far ahead .
Linux/Unix have the OS but not the tools .
Windows is screwed .</tokentext>
<sentencetext>Yeah like no computer will ever use more than 128k of ram.
Take your blinders off.
Multiple cores and large amounts of memory will allow desktop apps to do things we have never imagined and to do them in concert with the OS and other apps. The OS and development tools weigh heavily on all of this which never seems to get mentioned.
With Snow Leopard and Grand Central Dispatch Apple is far ahead.
Linux/Unix have the OS but not the tools.
Windows is screwed.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28247711</id>
	<title>Why new language, while there are good'old ones...</title>
	<author>kafka.fr</author>
	<datestamp>1244403900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Can someone explain to me <em>why</em> we would need a new language... There are languages designed from the ground up to allow parallel processing. Yes, structures inside the core language, not some extra libs. Just to name one: Ada, designed to allow parallel computing since before 1979 (yes, that's <b>30</b> years ago). Reference: <a href="http://www.adaic.org/whyada/multicore.html" title="adaic.org" rel="nofollow">http://www.adaic.org/whyada/multicore.html</a> [adaic.org]. Even on a single processor, doing memory-bound stuff (such as a sort on a huge amount of data) in parallel can show significant improvement.</htmltext>
<tokenext>Can someone explain to me why we would need a new language... There are languages designed from the ground up to allow parallel processing .
Yes , structures inside the core language , not some extra libs .
Just to name one : Ada , designed to allow parallel computing since before 1979 ( yes , that 's 30 years ago ) .
Reference : http : //www.adaic.org/whyada/multicore.html [ adaic.org ] .
Even on a single processor , doing memory-bound stuff ( such as a sort on a huge amount of data ) in parallel can show significant improvement .</tokentext>
<sentencetext>Can someone explain to me why we would need a new language... There are languages designed from the ground up to allow parallel processing.
Yes, structures inside the core language, not some extra libs.
Just to name one: Ada, designed to allow parallel computing since before 1979 (yes, that's 30 years ago).
Reference: http://www.adaic.org/whyada/multicore.html [adaic.org].
Even on a single processor, doing memory-bound stuff (such as a sort on a huge amount of data) in parallel can show significant improvement.</sentencetext>
</comment>
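The parallel, memory-bound sort the comment above mentions can be sketched as follows (a hypothetical Python illustration, not Ada; `parallel_sort` and the worker count are invented for the example, and in CPython a real speedup for CPU-bound sorting would need processes rather than threads):

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def parallel_sort(data, workers=4):
    """Sort `data` by sorting chunks concurrently, then k-way merging them."""
    if not data:
        return []
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        sorted_chunks = list(pool.map(sorted, chunks))
    # heapq.merge performs a lazy k-way merge of the already-sorted chunks.
    return list(heapq.merge(*sorted_chunks))

assert parallel_sort([5, 3, 1, 4, 2]) == [1, 2, 3, 4, 5]
```

The structure (partition, sort chunks concurrently, k-way merge) is the same whether the workers are Ada tasks, threads, or processes.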
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242955</id>
	<title>Chapel?</title>
	<author>Anonymous</author>
	<datestamp>1244403240000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext>Great, another bullshit proselytizing language written by some pios cancers of the planet. Probably written like some bitch like Larry Wall, who springs hardons at deveoping written versions of spoken languages just so he can cram the bible down poor savages' throats. <br> <br>

Fuck religion and the tards who practise it.</htmltext>
<tokentext>Great , another bullshit proselytizing language written by some pios cancers of the planet .
Probably written like some bitch like Larry Wall , who springs hardons at deveoping written versions of spoken languages just so he can cram the bible down poor savages ' throats .
Fuck religion and the tards who practise it .</tokentext>
<sentencetext>Great, another bullshit proselytizing language written by some pios cancers of the planet.
Probably written like some bitch like Larry Wall, who springs hardons at deveoping written versions of spoken languages just so he can cram the bible down poor savages' throats.
Fuck religion and the tards who practise it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28250739</id>
	<title>Re:I'm waiting for parallel libs for R</title>
	<author>CarpetShark</author>
	<datestamp>1244474580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Which amounts to running software, not programming new software.</p></htmltext>
<tokentext>Which amounts to running software , not programming new software .</tokentext>
<sentencetext>Which amounts to running software, not programming new software.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245833</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28275815</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>Anonymous</author>
	<datestamp>1244573220000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"or any database"<br>
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Funny you should mention that.. SQL Server is crappy enough, I know someone fighting withit because they went to multi-core systems (with slightly slower cores compared ot their older single-core systems.)    Slow down!  SQL Server is *SINGLE THREADED*</p><p>
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Anyway...the fact of the matter is, some jobs do naturally split up, but the general-purpose programming languages do NOTHING to help.  Fork-and-forget's the easiest way to take advantage of cores, anything tighter coupled a programming language designed for mutlprocessing helps a lot, but it's nonstandard too.</p></htmltext>
<tokenext>" or any database "           Funny you should mention that.. SQL Server is crappy enough , I know someone fighting withit because they went to multi-core systems ( with slightly slower cores compared ot their older single-core systems .
) Slow down !
SQL Server is * SINGLE THREADED *           Anyway...the fact of the matter is , some jobs do naturally split up , but the general-purpose programming languages do NOTHING to help .
Fork-and-forget 's the easiest way to take advantage of cores , anything tighter coupled a programming language designed for mutlprocessing helps a lot , but it 's nonstandard too .</tokentext>
<sentencetext>"or any database"
          Funny you should mention that.. SQL Server is crappy enough, I know someone fighting withit because they went to multi-core systems (with slightly slower cores compared ot their older single-core systems.
)    Slow down!
SQL Server is *SINGLE THREADED*
          Anyway...the fact of the matter is, some jobs do naturally split up, but the general-purpose programming languages do NOTHING to help.
Fork-and-forget's the easiest way to take advantage of cores, anything tighter coupled a programming language designed for mutlprocessing helps a lot, but it's nonstandard too.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243091</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245685</id>
	<title>Re:What's so hard?</title>
	<author>Anonymous</author>
	<datestamp>1244381640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>There are several classes of parallel programming</p><p>The 'stupidly parallelizable ones'.  These are large chunks of data where the results of one iteration do not really affect the next iteration.  So you can really fire up as many threads as you have processors and see a nice speedup.  Things like this are ray tracing, searching, sorting, etc.</p><p>Some interdependence.  These are where each thread can run on its own.  Doing its own thing.  But sometimes you have to spin wait on some other thread to finish whatever it is doing.  This sort of class is usually wrapped around a blocking call such as waiting on data from a TCP/IP port.  With some sort of parent thread waiting around for results or doing other things like drawing the screen.</p><p>Massive interdependence.  These are where there are global variables and at any time some thread can be using them.  So you need to lock around them.  You can end up with locking trains.  Where the parallel code actually just ends up acting like serial code.  In these cases it can actually be slower.  This is the class that people think of when they talk about 'languages that just do it for them'.  They are not quite sure what they are asking for and just want the 'compiler to do it for them'.  Which is noble and all but at this time does not work well unless well thought out.</p><p>The hard part comes from locking interdependence.  Such as thread 1 locks a then locks b, while thread 2 locks b then locks a.  In some cases you can end up with a deadlock (in this case I would say it is assured at some point).<br>Weird memory issues.  For example in Win32 you really should not share handles between threads but it still works.  It will just suddenly start acting wonky.  Why?  Because of the way memory is allocated for the thread.  
Which is an implementation-specific detail, but each threading library or system has its own quirks and tribal-knowledge rules.</p><p>Race conditions and memory inconsistency are why threading is hard.  With small simple programs it is pretty easy to keep straight in your head.  But as systems grow and time goes by it becomes hard to keep it all straight without introducing new inconsistencies into the state of the program.</p><p>Others have pointed out that communication is also an issue.  That is usually in the form of some sort of lock/semaphore/queue.  In some cases the communication between threads can overwhelm what you are doing and actually make things worse.</p><p>At this point in time it all really comes down to using the thing that holds your ears apart when using threads, and watching out for places where things can become inconsistent.</p><p>Adding threading in is EASY.  Making sure it doesn't totally trash your program?  Now that's hard...</p></htmltext>
<tokentext>There are several classes of parallel programmingThe 'stupidly parallelizable ones' .
These are large chunks of data where the results of one iteration does not really effect the next iteration .
So you can really fire up as many threads as you have processors and see a nice speedup .
Things like this are ray tracing , searching , sorting , etcSome interdependence .
These are where each thread can run on its own .
Doing its own thing .
But sometimes you have to spin wait on some other thread to finish whatever it is doing .
This sort of class is usually wrapped around a blocking call such as waiting on data from a TCPIP port .
With some sort of parent thread waiting around for results or doing other things like drawing the screen.Massive interdependence .
These are where there are global variables and at any time some thread can be using them .
So you need to lock around them .
You can end up with locking trains .
Where the parallel code actually just ends up acting like serial code .
In these cases it can actually be slower .
This is the class what people think of when they talk about 'languages that just do it for them' .
They are not quite sure what they are asking for and just want the 'compiler to do it for them' .
Which is noble and all but at this time does not work well unless well thought out.The hard part comes from locking interdependence .
Such as thread 1 locks a then lock b. Then thread 2 locks b then lock a. In some cases you can end up with a deadlock ( this case I would say it is assured at some point ) .Weird memory issues .
For example in Win32 you really should not share handles between threads but it still works .
It will just suddenly start acting wonky .
Why because of the way memory is allocated for the thread .
Which is an implementation specific detail but each threading library or system has its own quirks and tribal knowledge rules.Race conditions and memory inconstancy is why threading is hard .
With small simple programs it is pretty easy to keep straight in your head .
But as systems grow and time goes by it becomes hard to keep it all straight without introducing new inconsistencies into the state of the program.Others have pointed out that communication is also an issue .
That is usually in the form of some sort of lock/semaphore/queue .
In some cases the communication between threads can overwhelm what you are doing and actually make things worse.It all really at this point in time using the thing that holds your ears apart when using threads and watching out for places where things can become inconsistent.Adding threading in is EASY .
Making sure it doesnt totally trash your program .
Now thats hard.. .</tokentext>
<sentencetext>There are several classes of parallel programmingThe 'stupidly paralizable ones'.
These are large chunks of data where the results of one iteration does not really effect the next iteration.
So you can really fire up as many threads as you have processors and see a nice speedup.
Things like this are ray tracing, searching, sorting, etcSome interdependence.
These are where each thread can run on its own.
Doing its own thing.
But sometimes you have to spin wait on some other thread to finish whatever it is doing.
This sort of class is usually wrapped around a blocking call such as waiting on data from a TCPIP port.
With some sort of parent thread waiting around for results or doing other things like drawing the screen.Massive interdependence.
These are where there are global variables and at any time some thread can be using them.
So you need to lock around them.
You can end up with locking trains.
Where the parallel code actually just ends up acting like serial code.
In these cases it can actually be slower.
This is the class what people think of when they talk about 'languages that just do it for them'.
They are not quite sure what they are asking for and just want the 'compiler to do it for them'.
Which is noble and all but at this time does not work well unless well thought out.The hard part comes from locking interdependence.
Such as thread 1 locks a then lock b.  Then thread 2 locks b then lock a.  In some cases you can end up with a deadlock (this case I would say it is assured at some point).Weird memory issues.
For example in Win32 you really should not share handles between threads but it still works.
It will just suddenly start acting wonky.
Why because of the way memory is allocated for the thread.
Which is an implementation specific detail but each threading library or system has its own quirks and tribal knowledge rules.Race conditions and memory inconstancy is why threading is hard.
With small simple programs it is pretty easy to keep straight in your head.
But as systems grow and time goes by it becomes hard to keep it all straight without introducing new inconsistencies into the state of the program.Others have pointed out that communication is also an issue.
That is usually in the form of some sort of lock/semaphore/queue.
In some cases the communication between threads can overwhelm what you are doing and actually make things worse.It all really at this point in time using the thing that holds your ears apart when using threads and watching out for places where things can become inconsistent.Adding threading in is EASY.
Making sure it doesnt totally trash your program.
Now thats hard...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
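The lock-ordering deadlock described above (thread 1 locks a then b, while thread 2 locks b then a) has a classic structural fix: every thread acquires its locks in one global order. A minimal Python sketch, with hypothetical names:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
counter = 0

def acquire_in_order(*locks):
    """Take locks in a fixed global order (here: by object id), so two
    threads can never hold one lock each while waiting on the other."""
    ordered = sorted(locks, key=id)
    for lock in ordered:
        lock.acquire()
    return ordered

def worker(first, second):
    global counter
    for _ in range(10_000):
        held = acquire_in_order(first, second)  # same order in every thread
        counter += 1                            # protected by both locks
        for lock in held:
            lock.release()

# t2 names the locks in the opposite order: without the global ordering,
# this is exactly the a-then-b vs b-then-a deadlock recipe.
t1 = threading.Thread(target=worker, args=(lock_a, lock_b))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a))
t1.start(); t2.start(); t1.join(); t2.join()
assert counter == 20_000
```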
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28249687</id>
	<title>who has problems with threads?</title>
	<author>Bouncelot</author>
	<datestamp>1244468460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>"Getting the most from multicore processors is becoming an increasingly difficult task for CODERS"
- Fixed.

Real programmers have no problems handling multiple threads/processes.</htmltext>
<tokenext>" Getting the most from multicore processors is becoming an increasingly difficult task for CODERS " - Fixed .
Real programmers have no problems handling multiple threads/processes .</tokentext>
<sentencetext>"Getting the most from multicore processors is becoming an increasingly difficult task for CODERS"
- Fixed.
Real programmers have no problems handling multiple threads/processes.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28249985</id>
	<title>Multitasking</title>
	<author>Anonymous</author>
	<datestamp>1244470380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Multitasking is a great idea. So great that even Windows does it<nobr> <wbr></nobr>:-) Now, what happens when you have one program doing some heavy stuff, and another waiting for user input? The one waiting for user input becomes annoyingly slow, to the point where people tend to give up on multitasking, and just let the heavy program run while they go outside to do something else.</p><p>Then some brilliant people decided to fix the problem. If you have two programs running, and two cores, the heavy program cannot slow down the other program (well, as long as at least one of them stays away from the hard drive). Great, now multitasking works as it is supposed to. You don't notice at all that there is another program running in the background.</p><p>Now, some asshole decides that the one heavy program - the one that you finally don't even notice is running - should be able to use multiple cores. Once again, the heavy program will be using the same CPU as the one the user is waiting for. The system feels slow. But this time, more cores are not going to help. The heavy program is built not for 2 or 4 cores, but for perhaps 64 or even more cores. So no matter how big a CPU he can afford, it will still slow him down.</p><p>Why do some people seem to hate when computers get fast enough to multitask smoothly?</p></htmltext>
<tokentext>Multitasking is a great idea .
So great that even Windows does it : - ) Now , what happens when you have one program doing some heavy stuff , and another waiting for user input ?
The one waiting for user input becomes annoyingly slow , to the point where people tend to give up on multitasking , and just let the heavy program run while they go outside to do something else.Then some brilliant people decided to fix the problem .
If you have two programs running , and two cores , the heavy program can not slow down the other program ( well , as long as at least one of them stays away from the hard drive ) .
Great , now multitasking works at it is supposed to .
You do n't notice at all , that there is another program running in the background.Now , some asshole decides that the one heavy program - the one that you finally do n't even notice is running - should be able to use multiple cores .
Once again , the heavy program will be using the same CPU as the one the user is waiting for .
The system feels slow .
But this time , more cores is not going to help .
The heavy program is built not for 2 or for cores , but for perhaps 64 or even more cores .
So no matter how how big a CPU he can afford , it will still slow him down.Why do some people seem to hate when computers get fast enough to multitask smoothly ?</tokentext>
<sentencetext>Multitasking is a great idea.
So great that even Windows does it :-) Now, what happens when you have one program doing some heavy stuff, and another waiting for user input?
The one waiting for user input becomes annoyingly slow, to the point where people tend to give up on multitasking, and just let the heavy program run while they go outside to do something else.Then some brilliant people decided to fix the problem.
If you have two programs running, and two cores, the heavy program cannot slow down the other program (well, as long as at least one of them stays away from the hard drive).
Great, now multitasking works at it is supposed to.
You don't notice at all, that there is another program running in the background.Now, some asshole decides that the one heavy program - the one that you finally don't even notice is running - should be able to use multiple cores.
Once again, the heavy program will be using the same CPU as the one the user is waiting for.
The system feels slow.
But this time, more cores is not going to help.
The heavy program is built not for 2 or for cores, but for perhaps 64 or even more cores.
So no matter how how big a CPU he can afford, it will still slow him down.Why do some people seem to hate when computers get fast enough to multitask smoothly?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244389</id>
	<title>Re:What's so hard?</title>
	<author>Anonymous</author>
	<datestamp>1244371680000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>Simple answer: synchronized shared data access.</p><p>Technically creating the threads is the easy bit. Making sure that the threads don't fight over shared mutable data is so hard that most people give up. For instance, "serializable" isolation is the only really guaranteed isolation level for database transactions under all circumstances, but it can turn your DB into a uniprocessor; most people avoid it. Java's Swing UI gives up on data locking, and just says "only my thread touches the UI" and provides an API to book tasks on it.</p><p>There exist lots of cases where threaded parallelism is easy to implement. But many cases are hard, with subtle pathological difficulties. Once you bundle a few of these cases into the same system, correctness becomes impossible to estimate. Add to that the fact that many of the bugs are intermittent, debugger-resistant (schroedingbugs!) and potentially fatal (data corruption, when you encourage liveness; deadlocks, when you vote for safety), and you have some seriously difficult problems.</p></htmltext>
<tokentext>Simple answer : synchronized shared data access.Technically creating the threads is the easy bit .
Making sure that the threads do n't fight over shared mutable data is so hard that most people give up .
Frinstance , " serializable " isolation is the only really guaranteed isolation level for database transactions under all circumstances , but it can turn your DB into a uniprocessor ; most people avoid it .
Java 's Swing UI gives up on data locking , and just says " only my thread touches the UI " and provides API to book tasks on it.There exist a lots of cases where threaded parallelism is easy to implement .
But many cases are hard , with subtle pathological difficulties .
Once you bundle a few of these cases into the same system , correctness becomes impossible to estimate .
Add to that the fact that many of the bugs are intermittent , debugger-resistant ( schroedingbugs !
) and potentially fatal ( data corruption , when you encourage liveness ; deadlocks , when you vote for safety ) , and you have some seriously difficult problems .</tokentext>
<sentencetext>Simple answer: synchronized shared data access.Technically creating the threads is the easy bit.
Making sure that the threads don't fight over shared mutable data is so hard that most people give up.
Frinstance, "serializable" isolation is the only really guaranteed isolation level for database transactions under all circumstances, but it can turn your DB into a uniprocessor; most people avoid it.
Java's Swing UI gives up on data locking, and just says "only my thread touches the UI" and provides API to book tasks on it.There exist a lots of cases where threaded parallelism is easy to implement.
But many cases are hard, with subtle pathological difficulties.
Once you bundle a few of these cases into the same system, correctness becomes impossible to estimate.
Add to that the fact that many of the bugs are intermittent, debugger-resistant (schroedingbugs!
) and potentially fatal (data corruption, when you encourage liveness; deadlocks, when you vote for safety), and you have some seriously difficult problems.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
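The shared-mutable-data hazard this comment describes shows up in even the smallest example: a counter incremented from several threads. A hedged Python sketch (the names are invented for illustration):

```python
import threading

count = 0
count_lock = threading.Lock()

def increment(n):
    global count
    for _ in range(n):
        # "count += 1" is a read-modify-write; without the lock, two
        # threads can interleave the read and the write and silently
        # lose updates. The lock serializes the critical section.
        with count_lock:
            count += 1

threads = [threading.Thread(target=increment, args=(50_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert count == 200_000
```

Dropping the lock turns this into exactly the intermittent, debugger-resistant kind of bug the comment warns about: the total is usually 200,000, until one day it isn't.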
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244103</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>Anonymous</author>
	<datestamp>1244369700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>". . . will allow desktop apps to do things we have never imagined . .<nobr> <wbr></nobr>."
<br>
<br>
You just agreed with your opponent, AuMatar's, point:
    "Parallelization on the desktop is a solution is search of a problem . .<nobr> <wbr></nobr>."
<br>
<br>
Someday, maybe, hopefully, I wish I may, I wish I might . . .</htmltext>
<tokenext>" .
. .
will allow desktop apps to do things we have never imagined .
. .
" You just agreed with your opponent , AuMatar 's , point : " Parallelization on the desktop is a solution is search of a problem .
. .
" Someday , maybe , hopefully , I wish I may , I wish I might .
. .</tokentext>
<sentencetext>".
. .
will allow desktop apps to do things we have never imagined .
. .
"


You just agreed with your opponent, AuMatar's, point:
    "Parallelization on the desktop is a solution is search of a problem .
. .
"


Someday, maybe, hopefully, I wish I may, I wish I might .
. .</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243483</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28249271</id>
	<title>Re:What's so hard?</title>
	<author>Ihlosi</author>
	<datestamp>1244464560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><i>Not trying to troll or anything, but I'd always hear of how parallel programming is very complicated for programmers, </i> <p>

Yeah, yeah, any idiot can do parallel programming. The hard parts are a) finding ways to make parallel programming efficient and b) debugging such a program, since you have to deal with all kinds of concepts that never occur in a single-threaded program (reentrancy, deadlocks, locking mechanisms, etc).</p></htmltext>
<tokentext>Not trying to troll or anything , but I 'd always hear of how parallel programming is very complicated for programmers , Yeah , yeah , any idiot can do parallel programming .
The hard parts are a ) finding ways to make parallel programming efficient and b ) debugging such a program , since you have to deal with all kinds of concepts that never occur in a single-threaded program ( reentrancy , deadlocks , locking mechanisms , etc ) .</tokentext>
<sentencetext>Not trying to troll or anything, but I'd always hear of how parallel programming is very complicated for programmers,  

Yeah, yeah, any idiot can do parallel programming.
The hard parts are a) finding ways to make parallel programming efficient and b) debugging such a program, since you have to deal with all kinds of concepts that never occur in a single-threaded program (reentrancy, deadlocks, locking mechanisms, etc).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243657</id>
	<title>Re:Parallel programming is dead. No one uses it...</title>
	<author>Anonymous</author>
	<datestamp>1244365260000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Threading is parallelism, if it's not then what is?</p><p>Also, Team Fortress 2 has multiprocessor extensions.</p></htmltext>
<tokentext>Threading is parallelism , if it 's not then what is ? Also , Team Fortress 2 has multiprocessor extensions .</tokentext>
<sentencetext>Threading is parallelism, if it's not then what is?Also, Team Fortress 2 has multiprocessor extensions.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243507</id>
	<title>Re:Parallel programming is dead. No one uses it...</title>
	<author>MillionthMonkey</author>
	<datestamp>1244407320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>except scientists who use supercomputers</p></div><p>I don't really consider myself a scientist but I've had to write parallel systems twice now. It goes much better when you either don't need internode communication, or you're allowed to implement it that way (when nobody's around to complain they might have to configure something).
<br> <br>
Threading I agree is not the same thing... with a decent threading API available you can fan out a loop across all your CPU cores with a small paragraph of code, if the iterations are independent of one another. Setting up that independence (if you have to) is most of the work. But that's a trick you can also use in a UI to speed up animations etc.
<br> <br>
Of course there are standard libraries now to handle this stuff, so some of the fun is gone.</p>
	</htmltext>
<tokentext>except scientists who use supercomputersI do n't really consider myself a scientist but I 've had to write parallel systems twice now .
It goes much better when you either do n't need internode communication , or you 're allowed to implement it that way ( when nobody 's around to complain they might have to configure something ) .
Threading I agree is not the same thing... with a decent threading API available you can fan out a loop across all your CPU cores with a small paragraph of code , if the iterations are independent of one other .
Setting up that independence ( if you have to ) is most of the work .
But that 's a trick you can also use in a UI to speed up animations etc .
Of course there are standard libraries now to handle this stuff , so some of the fun is gone .</tokentext>
<sentencetext>except scientists who use supercomputersI don't really consider myself a scientist but I've had to write parallel systems twice now.
It goes much better when you either don't need internode communication, or you're allowed to implement it that way (when nobody's around to complain they might have to configure something).
Threading I agree is not the same thing... with a decent threading API available you can fan out a loop across all your CPU cores with a small paragraph of code, if the iterations are independent of one other.
Setting up that independence (if you have to) is most of the work.
But that's a trick you can also use in a UI to speed up animations etc.
Of course there are standard libraries now to handle this stuff, so some of the fun is gone.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023</parent>
</comment>
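The "small paragraph of code" that fans a loop of independent iterations out across cores might look like this (a Python sketch; `render_frame` is a made-up stand-in for per-iteration work, and in CPython truly CPU-bound iterations would want `ProcessPoolExecutor` instead of threads):

```python
from concurrent.futures import ThreadPoolExecutor

def render_frame(i):
    # Stand-in for an independent unit of work: one animation frame,
    # one scanline of a ray trace, one item of a search, ...
    return i * i

frames = range(100)
with ThreadPoolExecutor() as pool:
    # map preserves input order, so the fan-out is invisible to the caller.
    results = list(pool.map(render_frame, frames))

assert results == [render_frame(i) for i in frames]
```

As the comment says, the fan-out itself is the easy part; making each iteration genuinely independent of the others is where the work goes.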
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243137</id>
	<title>Re:Parallel programming is dead. No one uses it...</title>
	<author>beelsebob</author>
	<datestamp>1244404560000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p><em>Threading i don't count as parallel processing for the desktop. I don't even hear of any games or applications built for parallel.</em></p><p>Uhhhhhhhhhhh? Yes, well done with that...</p></htmltext>
<tokentext>Threading i do n't count as parallel processing for the desktop .
I do n't even hear of any games or applications built for parallel.Uhhhhhhhhhhh ?
Yes , well done with that.. .</tokentext>
<sentencetext>Threading i don't count as parallel processing for the desktop.
I don't even hear of any games or applications built for parallel.Uhhhhhhhhhhh?
Yes, well done with that...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28247343</id>
	<title>Re:What's so hard?</title>
	<author>mrlibertarian</author>
	<datestamp>1244398980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Multi-threading is hard when your threads share memory with each other. At first, it seems easy enough, but as your program becomes more complicated, you'll have to use more mutexes. The more mutexes you have, the more likely it is that you will run into deadlock. And even after your program is working, you'll be afraid to modify certain parts of it, because you know that one wrong move could lead to a race condition or a deadlock.<br> <br>So, what's the solution? In my opinion, whenever you're writing a complicated, multi-threaded program, you need to use the actor pattern. In other words, instead of sharing memory, have your threads (i.e. actors) send messages to each other. Suddenly, everything is pretty simple. You don't have to worry about deadlock anymore. You don't have to worry that some other thread is going to mess with one of your structures. For me, the actor pattern was a life-saver.<br> <br>Sure, I still share memory here and there, but whenever the program threatens to get away from me, I use the actor pattern to drive away the complexity.</htmltext>
<tokentext>Multi-threading is hard when your threads share memory with each other .
At first , it seems easy enough , but as your program becomes more complicated , you 'll have to use more mutexes .
The more mutexes you have , the more likely it is that you will run into dead-lock .
And even after your program is working , you 'll be afraid to modify certain parts of it , because you know that one wrong move could lead to a race condition or a dead lock .
So , what 's the solution ?
In my opinion , whenever you 're writing a complicated , multi-threaded program , you need to use the actor pattern .
In other words , instead of sharing memory , have your threads ( i.e .
actors ) send messages to each other .
Suddenly , everything is pretty simple .
You do n't have to worry about dead-lock anymore .
You do n't have to worry that some other thread is going to mess with one of your structures .
For me , the actor pattern was a life-saver .
Sure , I still share memory here and there , but whenever the program threatens to get away from me , I use the actor pattern to drive away the complexity .</tokentext>
<sentencetext>Multi-threading is hard when your threads share memory with each other.
At first, it seems easy enough, but as your program becomes more complicated, you'll have to use more mutexes.
The more mutexes you have, the more likely it is that you will run into dead-lock.
And even after your program is working, you'll be afraid to modify certain parts of it, because you know that one wrong move could lead to a race condition or a dead lock.
So, what's the solution?
In my opinion, whenever you're writing a complicated, multi-threaded program, you need to use the actor pattern.
In other words, instead of sharing memory, have your threads (i.e.
actors) send messages to each other.
Suddenly, everything is pretty simple.
You don't have to worry about dead-lock anymore.
You don't have to worry that some other thread is going to mess with one of your structures.
For me, the actor pattern was a life-saver.
Sure, I still share memory here and there, but whenever the program threatens to get away from me, I use the actor pattern to drive away the complexity.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28250701</id>
	<title>The JVM</title>
	<author>smbell</author>
	<datestamp>1244474460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I'm sure I'll get the "Java is slow and it eats all my memory" crap, but here it goes.<br> <br>

The JVM (Java Virtual Machine) is one of the few platforms with a well-defined memory model (<a href="http://work.tinou.com/2008/08/the-java-memory-model-in-500-words.html" title="tinou.com" rel="nofollow">Short Description</a> [tinou.com], <a href="http://en.wikipedia.org/wiki/Java_Memory_Model" title="wikipedia.org" rel="nofollow">Wikipedia</a> [wikipedia.org]).<br> <br>

The main problem in parallel programming is dealing with data across different threads, knowing when data written in one thread is visible from another thread, and efficiently communicating between threads.  The JVM platform can handle all of this in a deterministic manner, which is key.<br> <br>
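That visibility guarantee can be illustrated with a minimal sketch (the class and field names here are invented for the example): under the Java memory model, a write to a volatile field happens-before every subsequent read of that field, so the reader thread below is guaranteed to see the payload written before the flag was set.

```java
// Minimal sketch of the JMM visibility guarantee (names are illustrative).
public class VisibilityDemo {
    private static volatile boolean ready = false; // publication flag
    private static int payload = 0;                // ordinary (non-volatile) field

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            payload = 42;   // ordinary write...
            ready = true;   // ...published by the volatile write
        });
        Thread reader = new Thread(() -> {
            while (!ready) { }  // volatile read; loop exits once the write is visible
            // happens-before: payload is guaranteed to be 42 here
            System.out.println("payload = " + payload);
        });
        reader.start();
        writer.start();
        writer.join();
        reader.join();
    }
}
```

Without the volatile modifier, the reader could legally spin forever or observe a stale payload; the volatile write/read pair is what makes the outcome deterministic.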

Now I say JVM here because it's the platform, and not the Java language, that makes it all work.  Java the language (as of 1.5) has great concurrency support, but there are also other languages built with concurrency in mind from the get-go, like Clojure and Scala.<br> <br>

Plus, it all works cross-platform.</htmltext>
<tokenext>I 'm sure I 'll get a sure I 'll get the Java is slow and it eats all my memories crap but here it goes .
The JVM ( Java Virtual Machine ) is one of the few platforms that have a well defined memory model ( Short Description [ tinou.com ] Wikipedia [ wikipedia.org ] ) The main problem in parallel programming is dealing with data across different threads , knowing when data written in one thread is visible from another thread , and efficiently communicating between threads .
The JVM platform can handle all of this in a deterministic manner , which is key .
Now i say JVM here because it 's the platform , and not the Java language , that makes it all work .
Java the language ( as of 1.5 ) has great concurrency support , but there are also other languages built with concurrency in mind from the get go like Clojure and Scala .
Plus it all works cross platform .</tokentext>
<sentencetext>I'm sure I'll get a sure I'll get the Java is slow and it eats all my memories crap but here it goes.
The JVM (Java Virtual Machine) is one of the few platforms that have a well defined memory model (Short Description [tinou.com] Wikipedia [wikipedia.org]) 

The main problem in parallel programming is dealing with data across different threads, knowing when data written in one thread is visible from another thread, and efficiently communicating between threads.
The JVM platform can handle all of this in a deterministic manner, which is key.
Now i say JVM here because it's the platform, and not the Java language, that makes it all work.
Java the language (as of 1.5) has great concurrency support, but there are also other languages built with concurrency in mind from the get go like Clojure and Scala.
Plus it all works cross platform.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28299413</id>
	<title>Re:I'm waiting for parallel libs for R</title>
	<author>alexandre_ganso</author>
	<datestamp>1244711700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><a href="http://www.rparallel.org/" title="rparallel.org" rel="nofollow">http://www.rparallel.org/</a> [rparallel.org]</p></htmltext>
<tokenext>http : //www.rparallel.org/ [ rparallel.org ]</tokentext>
<sentencetext>http://www.rparallel.org/ [rparallel.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242943</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243127</id>
	<title>Old languages designed for parallel processing?</title>
	<author>Anonymous</author>
	<datestamp>1244404500000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p> <b> <a href="http://erlang.org/" title="erlang.org">Erlang</a> [erlang.org] </b> is an older, established language designed for parallel processing.</p><p>Erlang was first developed in 1986, making it about a decade older than Java or Ruby. It is younger than Perl or C, and just a tad older than Python. It is a mature language with a large support community, especially in industrial applications. It is time-tested and proven.</p><p>It is also open source and offers many options for commercial support. </p><p>Before anyone at DARPA decides they can design a better language for concurrent parallel programming, I think they should be forced to spend one year learning Ada and a second year working in Ada. If they survive, they will most likely be cured of the notion that the Defense Department can design good programming languages.</p></htmltext>
<tokenext>Erlang [ erlang.org ] is an older established language designed for parallel processing.Erlang was first developed in 1986 , making it about a decade older than Java or Ruby .
It is younger than Perl or C , and just a tad older than Python .
It is a mature language with a large support community , especially in industrial applications .
It is time tested and proven.It is also Open source and offers many options for commercial support .
Before anyone at DARPA thinks that they can design a better language for concurrent parallel programming then I think they should be forced to spend 1 year learning Ada , and a second year working in Ada .
If they survive they will most likely be cured of the thought that the Defense department can design good programming languages</tokentext>
<sentencetext>  Erlang [erlang.org]  is an older established language designed for parallel processing.Erlang was first developed in 1986, making it about a decade older than Java or Ruby.
It is younger than Perl or C, and just a tad older than Python.
It is a mature language with a large support community, especially in industrial applications.
It is time tested and proven.It is also Open source and offers many options for commercial support.
Before anyone at DARPA thinks that they can design a better language for concurrent parallel programming then I think they should be forced to spend 1 year learning Ada, and a second year working in Ada.
If they survive they will most likely be cured of the thought that the Defense department can design good programming languages</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28253499</id>
	<title>Re:Old languages designed for parallel processing?</title>
	<author>Robb</author>
	<datestamp>1244487960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I don't think the active/passive synchronisation provided by Ada is the solution for using multiple cores efficiently, but since I keep seeing people reinvent the same solution in Java, there must be something useful about it. I found Ada quite easy to learn, as it is remarkably consistent, although that is the result of its being designed by just a couple of people working closely together.</htmltext>
<tokenext>I do n't think the active/passive synchronisation provided by Ada is the solution for using multiple cores efficiently but since I keep seeing people reinvent the same solution in Java there must be something useful about it .
I found Ada quite easy to learn as it is remarkably consistent although that is the result of it being designed by just a couple people working closely together .</tokentext>
<sentencetext>I don't think the active/passive synchronisation provided by Ada is the solution for using multiple cores efficiently but since I keep seeing people reinvent the same solution in Java there must be something useful about it.
I found Ada quite easy to learn as it is remarkably consistent although that is the result of it being designed by just a couple people working closely together.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243127</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243189</id>
	<title>A view based on history...</title>
	<author>Anonymous</author>
	<datestamp>1244404860000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>Rehash time...</p><p>Parallelism typically falls into two buckets: data parallel and functional parallel. The first challenge for the general programming public is identifying which is which. The second challenge is synchronizing parallelism in as bug-free a way as possible while retaining the performance advantage of the parallelism.</p><p>Doing fine-grained parallelism, which is what the functional crowd is promising, is something that will take a *long* time to become mainstream (other interesting examples are things like LLVM and K, but they tend to focus more on data parallel). Functional is too abstract for most people to deal with (yes, I understand it is easy for *you*).</p><p>Short term (i.e., ~5 years), the real benefit will be in threaded/parallel frameworks (my app logic can be serial; tasks that my app needs happen in the background).</p><p>Changing industry tool-chains to something entirely new takes many, many years. What most likely will happen is that transactional memory will make it into some level of hardware, enabling faster parallel constructs; a cool new language will pop up formalizing all of these features; someone will tear that cool new language apart by removing the rigor and giving it C/C++ style syntax; then the industry will start using it.</p></htmltext>
<tokenext>Rehash time...Parallelism typically falls into two buckets : Data parallel and functional parallel .
The first challenge for the general programming public is identifying what is what .
The second challenge is synchronizing parallelism in as bug free way as possible while retaining the performance advantage of the parallelism.Doing fine-grained parallelism - what the functional crowd is promising , is something that will take a * long * time to become mainstream ( Other interesting examples are things like LLVM and K , but they tend to focus more on data parallel ) .
Functional is too abstract for most people to deal with ( yes , I understand it is easy for * you * ) .Short term ( i.e .
~ 5 years ) , the real benefit will be in threaded/parallel frameworks ( my app logic can be serial , tasks that my app needs happen in the background ) .Changing industry tool-chains to something entirely new takes many many years .
What most likely will happen is transactional memory will make it into some level of hardware , enabling faster parallel constructs , a cool new language will pop up formalizing all of these features .
Someone will tear that cool new language apart by removing the rigor and giving it C/C + + style syntax , then the industry will start using it</tokentext>
<sentencetext>Rehash time...Parallelism typically falls into two buckets: Data parallel and functional parallel.
The first challenge for the general programming public is identifying what is what.
The second challenge is synchronizing parallelism in as bug free way as possible while retaining the performance advantage of the parallelism.Doing fine-grained parallelism - what the functional crowd is promising, is something that will take a *long* time to become mainstream (Other interesting examples are things like LLVM and K, but they tend to focus more on data parallel).
Functional is too abstract for most people to deal with (yes, I understand it is easy for *you*).Short term (i.e.
~5 years), the real benefit will be in threaded/parallel frameworks (my app logic can be serial, tasks that my app needs happen in the background).Changing industry tool-chains to something entirely new takes many many years.
What most likely will happen is transactional memory will make it into some level of hardware, enabling faster parallel constructs, a cool new language will pop up formalizing all of these features.
Someone will tear that cool new language apart by removing the rigor and giving it C/C++ style syntax, then the industry will start using it</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28251295</id>
	<title>Re:What's so hard?</title>
	<author>mkramer</author>
	<datestamp>1244477400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>In addition to the other commentaries:</p><p>Threading only applies to a subset of parallelizable cases, on a subset of computing architectures.</p><p>You're not going to use threads to decompose a for loop doing simple non-dependent mathematical operations over a large vector.  That's why the ugly vector operations were added to C for AltiVec/MMX/SSE/etc.  It's another type of problem.</p><p>As long as software development is stuck in a cycle of having to invent new programming solutions for every new computing architecture, we're not going to see a whole lot of variance/advancement in computer architectures.</p><p>What is desperately being sought now is a much more generic way of indicating parallelism in code, which a compiler can then parallelize (or not) using any number of techniques, depending on the specific architecture it's compiling to.</p><p>People talk about fancy super-intelligent compilers, but that's not really what we're after now.  The key is "merely" being able to concisely indicate data relationships (foremost dependency, but other attributes as well), which will open the door for any number of hardware and software innovations.</p></htmltext>
<tokenext>In addition to the other commentaries : Threading only applies to a subset of parallizable cases , on a subset of computing architectures.You 're not going to use threads to decompose a for loop doing simple non-dependent mathematic operations over a large vector .
That 's why the ugly vector operations were added to C for Alti-Vec/MMX/SSE/etc .
It 's another type of problem.As long as software development is stuck in a cycle of having to invent new programming solutions for every new computing architecture , we 're not going to see a whole lot of variance/advancement in computer architectures.What is desperately being sought now is a much more generic way of indicating parralelism in code , that a compiler can then parralelize or no parallelize using any number of techniques , depending on the specific architecture it 's compiling to.People talk about fancy super-intelligent compilers , but that 's not really what we 're after now .
They key is " merely " being able to concisely indicate data relationships ( foremost dependency , but other attributes as well ) , which will open the door for any number of hardware and software innovations .</tokentext>
<sentencetext>In addition to the other commentaries:Threading only applies to a subset of parallizable cases, on a subset of computing architectures.You're not going to use threads to decompose a for loop doing simple non-dependent mathematic operations over a large vector.
That's why the ugly vector operations were added to C for Alti-Vec/MMX/SSE/etc.
It's another type of problem.As long as software development is stuck in a cycle of having to invent new programming solutions for every new computing architecture, we're not going to see a whole lot of variance/advancement in computer architectures.What is desperately being sought now is a much more generic way of indicating parralelism in code, that a compiler can then parralelize or no parallelize using any number of techniques, depending on the specific architecture it's compiling to.People talk about fancy super-intelligent compilers, but that's not really what we're after now.
They key is "merely" being able to concisely indicate data relationships (foremost dependency, but other attributes as well), which will open the door for any number of hardware and software innovations.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28252655</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>PitaBred</author>
	<datestamp>1244483700000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>What about that other 10% of the day? It's mostly for gamers and developers, but multiple cores really do speed a lot of things up. And they're starting to be quite useful now that Joe User is getting into video and audio editing and so on. Those most certainly are CPU-limited applications, and they are pretty amenable to parallelism as well. Just because you only email, browse the web and use a word processor doesn't mean that's what everyone does.</htmltext>
<tokenext>What about that other 10 \ % of the day ?
It 's mostly for gamers and developers , but multiple cores really does speed a lot of things up .
And they 're starting to be quite useful now that Joe User is getting into video and audio editing and so on .
Those most certainly are CPU-limited applications , and they are pretty amenable to parallelism as well .
Just because you only email , browse the web and use a word processor does n't mean that 's what everyone does .</tokentext>
<sentencetext>What about that other 10\% of the day?
It's mostly for gamers and developers, but multiple cores really does speed a lot of things up.
And they're starting to be quite useful now that Joe User is getting into video and audio editing and so on.
Those most certainly are CPU-limited applications, and they are pretty amenable to parallelism as well.
Just because you only email, browse the web and use a word processor doesn't mean that's what everyone does.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243603</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246031</id>
	<title>Re:Multi-threaded or Parallel?</title>
	<author>ceoyoyo</author>
	<datestamp>1244385060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Part of the problem is probably insisting on calling it "threaded" and "parallel" when you actually mean "embarrassingly parallel" and "more tightly coupled" or somesuch.</p><p>The use or non-use of threads in any of their forms has nothing to do with whether you're talking about a very easily parallelized task, such as serving many web pages at the same time, or a more difficult-to-parallelize task such as searching.</p></htmltext>
<tokenext>Part of the problem is probably insisting on calling it " threaded " and " parallel " when you actually mean " embarrassingly parallel " and " more tightly coupled " or somesuch.The use or non-use of threads in any of their forms has nothing to do with whether you 're talking about a very easily parallelized task , such as serving many web pages at the same time , or a more difficult-to-parallelize task such as searching .</tokentext>
<sentencetext>Part of the problem is probably insisting on calling it "threaded" and "parallel" when you actually mean "embarrassingly parallel" and "more tightly coupled" or somesuch.The use or non-use of threads in any of their forms has nothing to do with whether you're talking about a very easily parallelized task, such as serving many web pages at the same time, or a more difficult-to-parallelize task such as searching.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243923</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243379</id>
	<title>Re:Awful example in the article</title>
	<author>awol</author>
	<datestamp>1244406420000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><div class="quote"><p>The example in the article is atrocious.</p><p>Why would you want the withdrawal and balance check to run concurrently?</p></div><p>Because I can do a whole lot of "local" withdrawal processing whilst my balance check is off checking the canonical source of balance information. If it comes back OK, then the work I have been doing in parallel is now committable work and my transaction is done, perhaps in no more time than either the balance check or the withdrawal, whichever is the longer. Whilst the balance check/withdrawal example may seem ridiculous, there are some very interesting applications of this kind of problem in securities (financial) trading systems, where the canonical balances of different instruments would conveniently (and sometimes mandatorily) be stored in different locations, and some complex synthetic transactions require access to balances from more than one instrument in order to execute properly.</p><p>It seems to me that most of the interesting parallelism problems relate to distributed systems, and it is not just a question of N-phase-commit databases but rather a construct of "end to end" dependencies in your processing chain, where the true source of data cannot be accessed from all the nodes in the cluster at the same time from a procedural perspective.</p><p>It is this fact that suggests to me that the answer to these issues is a radical change in language toward functional or logical languages like Haskell and Prolog, with Erlang being a very interesting place on that path for right now.</p>
	</htmltext>
<tokenext>The example in the article is atrocious.Why would you want the withdrawal and balance check to run concurrently ? Because I can do a whole lot of " local " withdrawal processing whilst my balance check is off checking the canonical source of balance information .
If it 's comes back OK then the work I have been doing in parallel is now commitable work and my transaction is done .
Perhaps in no more time than either of the balance check or the withdrawal whichever is the longest .
Whilst the balance check/withdrawal example may seem ridiculous .
There are some very interesting applications of this kind of problem in securities ( financial ) trading systems where the canonical balances of different instruments would conveniently ( and some times mandatorily ) stored in different locations and some complex synthetic transactions require access to balances from more than one instrument in order to execute properly.It seems to me that most of the interesting parallism problems relate to distributed systems and it is not just a question of N phase commit databases but rather a construct of " end to end " dependencies in your processing chain where the true source of data can not be accessed from all the nodes in the cluster at the same time from a procedural perspective.It is this fact that to me suggests that the answer to these issues is a radical change in language toward the functional or logical types of languages like haskel and prolog with erlang being a very interesting place on that path for right now .</tokentext>
<sentencetext>The example in the article is atrocious.Why would you want the withdrawal and balance check to run concurrently?Because I can do a whole lot of "local" withdrawal processing whilst my balance check is off checking the canonical source of balance information.
If it's comes back OK then the work I have been doing in parallel is now commitable work and my transaction is done.
Perhaps in no more time than either of the balance check or the withdrawal whichever is the longest.
Whilst the balance check/withdrawal example may seem ridiculous.
There are some very interesting applications of this kind of problem in securities (financial) trading systems where the canonical balances of different instruments would conveniently (and some times mandatorily) stored in different locations and some complex synthetic transactions require access to balances from more than one instrument in order to execute properly.It seems to me that most of the interesting parallism problems relate to distributed systems and it is not just a question of N phase commit databases but rather a construct of "end to end" dependencies in your processing chain where the true source of data cannot be accessed from all the nodes in the cluster at the same time from a procedural perspective.It is this fact that to me suggests that the answer to these issues is a radical change in language toward the functional or logical types of languages like haskel and prolog with erlang being a very interesting place on that path for right now.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243129</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242943</id>
	<title>I'm waiting for parallel libs for R</title>
	<author>G3ckoG33k</author>
	<datestamp>1244403120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'm waiting for parallel libs for R, even if I'm told that scripted languages won't have much of a future in parallel processing. All I can do is hope. Sigh.</p></htmltext>
<tokenext>I 'm waiting for parallel libs for R , even if i 'm told that scripted languages wo n't have much of a future in parallel processing .
All I can do is hope .
Sigh .</tokentext>
<sentencetext>I'm waiting for parallel libs for R, even if i'm told that scripted languages won't have much of a future in parallel processing.
All I can do is hope.
Sigh.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28252459</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>Keynan</author>
	<datestamp>1244483100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I agree in large part that you're right that the network and other forms of I/O are the biggest speed problem. However, you seem to be overlooking a big portion of the desktop market: gamers.</p><p>3D rendering can take advantage of massive parallelization.</p></htmltext>
<tokenext>I agree in large part that your right , that the network and other forms of I/O are the biggest speed problem .
However , you seem to be over looking a big portion of the desktop market : Gamers.3D rendering can take advantage of massive parralization</tokentext>
<sentencetext>I agree in large part that your right, that the network and other forms of I/O are the biggest speed problem.
However, you seem to be over looking a big portion of the desktop market: Gamers.3D rendering can take advantage of massive parralization</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243603</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245931</id>
	<title>Re:Parallel programming is dead. No one uses it...</title>
	<author>ceoyoyo</author>
	<datestamp>1244383980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>Threading i don't count as parallel processing for the desktop.</i></p><p>There are no vehicles that carry more than two people.  Anything with more than two wheels I don't count as a vehicle.</p></htmltext>
<tokenext>Threading i do n't count as parallel processing for the desktop.There are no vehicles that carry more than two people .
Anything with more than two wheels I do n't count as a vehicle .</tokentext>
<sentencetext>Threading i don't count as parallel processing for the desktop.There are no vehicles that carry more than two people.
Anything with more than two wheels I don't count as a vehicle.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243923</id>
	<title>Multi-threaded or Parallel?</title>
	<author>ipoverscsi</author>
	<datestamp>1244368260000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>I have not read the article (par for the course here) but I think there is probably some confusion among the commenters regarding the difference between multi-threaded programs and parallel algorithms.  Database servers, asynchronous I/O, background tasks and web servers are all examples of multi-threaded applications, where each thread can run independently of every other thread with locks protecting access to shared objects.  This is different from (and probably simpler than) parallel programs.  Map-reduce is a great example of a parallel distributed algorithm, but it is only one parallel computing model: Multiple Instruction / Multiple Data (MIMD).  Single Instruction / Multiple Data (SIMD) algorithms implemented on super-computers like Cray (more of a vector machine, but it's close enough to SIMD) and MasPar systems require different and far more complex algorithms.   In addition, purpose-built supercomputers may have additional restrictions on their memory accesses, such as whether multiple CPUs can concurrently read or write from memory.  </p><p>Of course, the Cray and MasPar systems are purpose-built machines, and, much like special-built processors have fallen behind general-purpose CPUs in performance, Cray and MasPar systems have fallen into disuse and virtual obscurity; therefore, one might argue that SIMD-type systems and their associated algorithms should be discounted.  
But, there is a large class of problems -- particularly sorting algorithms -- well suited to SIMD algorithms, so perhaps we shouldn't be so quick to dismiss them.</p><p>There is a book called <i>An Introduction to Parallel Algorithms</i> by Joseph JaJa (<a href="http://www.amazon.com/Introduction-Parallel-Algorithms-Joseph-JaJa/dp/0201548569" title="amazon.com">http://www.amazon.com/Introduction-Parallel-Algorithms-Joseph-JaJa/dp/0201548569</a> [amazon.com]) that shows some of the complexities of developing truly parallel algorithms.</p><p>(Disclaimer: I own a copy of that book but otherwise have no financial interests in it.)</p></htmltext>
<tokenext>I have not read the article ( par for the course here ) but I think there is probably some confusion among the commenters regarding the difference between multi-threading programs and parallel algorithms .
Database servers , asynchronous I/O , background tasks and web servers are all examples of multi-threaded applications , where each thread can run independently of every other thread with locks protecting access to shared objects .
This is different from ( and probably simpler than ) parallel programs .
Map-reduce is a great example of a parallel distributed algorithm , but it is only one parallel computing model : Multiple Instruction / Multiple Data ( MIMD ) .
Single Instruction / Multiple Data ( SIMD ) algorithms implemented on super-computers like Cray ( more of a vector machine , but it 's close enough to SIMD ) and MasPar systems require different and far more complex algorithms .
In addition , purpose-built supercomputers may have additional restrictions on their memory accesses , such as whether multiple CPUs can concurrently read or write from memory .
Of course , the Cray and Maspar systems are purpose-built machines , and , much like special-build processors have fallen in performance to general purpose CPUs , Cray and Maspar systems have fallen into disuse and virtual obscurity ; therefore , one might argue that SIMD-type systems and their associated algorithms should be discounted .
But , there is a large class of problems -- particularly sorting algorithms -- well suited to SIMD algorithms , so perhaps we should n't be so quick to dismiss them.There is a book called An Introduction to Parallel Algorithms by Joseph JaJa ( http : //www.amazon.com/Introduction-Parallel-Algorithms-Joseph-JaJa/dp/0201548569 [ amazon.com ] ) that shows some of the complexities of developing truly parallel algorithms .
( Disclaimer : I own a copy of that book but otherwise have no financial interests in it .
)</tokentext>
<sentencetext>I have not read the article (par for the course here) but I think there is probably some confusion among the commenters regarding the difference between multi-threading programs and parallel algorithms.
Database servers, asynchronous I/O, background tasks and web servers are all examples of multi-threaded applications, where each thread can run independently of every other thread with locks protecting access to shared objects.
This is different from (and probably simpler than) parallel programs.
Map-reduce is a great example of a parallel distributed algorithm, but it is only one parallel computing model: Multiple Instruction / Multiple Data (MIMD).
Single Instruction / Multiple Data (SIMD) algorithms implemented on super-computers like Cray (more of a vector machine, but it's close enough to SIMD) and MasPar systems require different and far more complex algorithms.
In addition, purpose-built supercomputers may have additional restrictions on their memory accesses, such as whether multiple CPUs can concurrently read or write from memory.
Of course, the Cray and Maspar systems are purpose-built machines, and, much like special-build processors have fallen in performance to general purpose CPUs, Cray and Maspar systems have fallen into disuse and virtual obscurity; therefore, one might argue that SIMD-type systems and their associated algorithms should be discounted.
But, there is a large class of problems -- particularly sorting algorithms -- well suited to SIMD algorithms, so perhaps we shouldn't be so quick to dismiss them.There is a book called An Introduction to Parallel Algorithms by Joseph JaJa (http://www.amazon.com/Introduction-Parallel-Algorithms-Joseph-JaJa/dp/0201548569 [amazon.com]) that shows some of the complexities of developing truly parallel algorithms.
(Disclaimer: I own a copy of that book but otherwise have no financial interests in it.
)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243701</id>
	<title>Re:What's so hard?</title>
	<author>Anonymous</author>
	<datestamp>1244365800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Assuming that you want to add everything:<br>(pmap + a b)</p><p>Or just the first 1000 items:<br>(pmap + (take 1000 a) (take 1000 b))<nobr> <wbr></nobr>/Clojure ftw.<nobr> <wbr></nobr>:p</p></htmltext>
<tokenext>Assuming that you want to add everything : ( pmap + a b ) Or just the first 1000 items : ( pmap + ( take 1000 a ) ( take 1000 b ) ) /Clojure ftw .
: p</tokentext>
<sentencetext>Assuming that you want to add everything:(pmap + a b)Or just the first 1000 items:(pmap + (take 1000 a) (take 1000 b)) /Clojure ftw.
:p</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243181</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243369</id>
	<title>Re:What's so hard?</title>
	<author>Yacoby</author>
	<datestamp>1244406300000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext>Data communication in a foolproof way. Writing a threaded program is easy if the program is simple, and you can even squeeze a bit more performance out of it using multiple threads. But once you use locking, you open yourself up to the possibility of race conditions, deadlocks and other nightmares. <br> <br>


Extending this to something like a game engine is much harder. Say we split our physics and rendering into two threads. How does the physics thread update the render thread? We could just lock the whole scene graph, but then we don't get much of a performance increase, if any. We could instead use two buffers: the renderer renders the data from one, the physics thread updates the other, and when we are ready to update the frame we just swap the buffers. Then we end up with some input lag. There are still complications. What happens if we add an AI thread? How does it add data to the buffer in a way that doesn't conflict with the physics thread? <br> <br>


We could use lock-free lists, which are very hard to get right. Even some implementations I have seen end up locking the heap, which is exactly what we want to avoid. And even then we run into issues.<br>

Don't get me started on debugging threaded applications: discovering that while it works fine on one and two cores, 0.1% of the time on a quad core there is a deadlock.
<br> <br>
So to sum it up: anyone can write a threaded application where the tasks are easy to split, and if you are designing it from the ground up it is even easier. But if you need to write performance-critical, maintainable code that involves a lot of communication, it suddenly gets much harder.</htmltext>
<tokenext>Data communication in a foolproof way .
Writing a threaded program is easy if the program is simple .
You can even get a bit more performance out of a program using multiple threads if you use locking .
If you use locking , you end up with the possibility of race conditions , deadlock and other nightmares .
Extending this to something like a game engine is much harder .
Say we split our physics and rendering into two threads .
How does the physics thread update the render thread ?
We could just lock the whole scene graph , but then we do n't get much of a performance increase , if at all .
We then could use two buffers .
The renderer renders the data from one , and the physics thread updates the other .
When we are ready to update the frame , we just swap the buffers .
Then we end up with some input lag .
There are still complications .
What happens if we add an AI thread .
How does that add data to the buffer in a way that does n't conflict with the physics thread ?
We could use lock free lists , which are very hard to get right .
Even some implementations that I have seen end up locking the heap , which we want to avoid .
But even then we end up with some issues .
Do n't get me started on debugging threaded applications .
Finding that while it works fine on one and two cores .
0.1 \ % of the time on a quad core there is a deadlock .
So to sum it up .
Anyone can write a threaded application where it is easy to split the tasks .
If you are designing it from the ground up , it is even easier .
If you need to write performance critical maintainable code that involves a lot of communication , it suddenly gets much harder .</tokentext>
<sentencetext>Data communication in a foolproof way.
Writing a threaded program is easy if the program is simple.
You can even get a bit more performance out of a program using multiple threads if you use locking.
If you use locking, you end up with the possibility of race conditions, deadlock and other nightmares.
Extending this to something like a game engine is much harder.
Say we split our physics and rendering into two threads.
How does the physics thread update the render thread?
We could just lock the whole scene graph, but then we don't get much of a performance increase, if at all.
We then could use two buffers.
The renderer renders the data from one, and the physics thread updates the other.
When we are ready to update the frame, we just swap the buffers.
Then we end up with some input lag.
There are still complications.
What happens if we add an AI thread.
How does that add data to the buffer in a way that doesn't conflict with the physics thread?
We could use lock free lists, which are very hard to get right.
Even some implementations that I have seen end up locking the heap, which we want to avoid.
But even then we end up with some issues.
Don't get me started on debugging threaded applications.
Finding that while it works fine on one and two cores.
0.1\% of the time on a quad core there is a deadlock.
So to sum it up.
Anyone can write a threaded application where it is easy to split the tasks.
If you are designing it from the ground up, it is even easier.
If you need to write performance critical maintainable code that involves a lot of communication, it suddenly gets much harder.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243355</id>
	<title>Re:What's so hard?</title>
	<author>Anonymous</author>
	<datestamp>1244406180000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>That's what libraries are for.  This is what you youngins continually forget.  Properly design and implement libraries of useful code ONCE and use them many times.</p><p>You could easily write a "vector_add" function which spawns (or unblocks pre-spawned, which is smarter) threads to perform a variety of tasks.  Then from your application a single line of code would perform an optimized parallel vector addition or whatever.</p><p>In fact, a smart DSP lib today would do just that: pre-spawn a bunch of threads, then host a job server which unblocks threads to work on given tasks.  That way you have single-line functions like vector_add(in1, in2, out, size), etc.</p><p>So you could actually write really easy-to-read parallel programs in C.  You just have to know the first thing about software development.</p><p>In short, your rant is a product of not knowing what you are doing.</p></htmltext>
<tokenext>That 's what libraries are for .
This is what you youngins continually forget .
Design and implement properly libraries of useful code ONCE and use them many times.You could easily write a " vector \ _add " function which spawns ( or unlocks pre-spawned which is smarter ) threads to perform a variety of tasks .
Then from your application a single line of code would perform an optimized parallel vector addition or whatever.In fact , a smart DSP lib today would do just that .
Pre-spawn a bunch of threads then host a job server which unlocks threads to work on given tasks .
That way you have single line functions like vector \ _add ( in1 , in2 , out , size ) , etc...So you could actualy write really easy to read parallel programs in C. You just have to know the first thing about software development.In short , your rant is a product of not knowing what you are doing .</tokentext>
<sentencetext>That's what libraries are for.
This is what you youngins continually forget.
Design and implement properly libraries of useful code ONCE and use them many times.You could easily write a "vector\_add" function which spawns (or unlocks pre-spawned which is smarter) threads to perform a variety of tasks.
Then from your application a single line of code would perform an optimized parallel vector addition or whatever.In fact, a smart DSP lib today would do just that.
Pre-spawn a bunch of threads then host a job server which unlocks threads to work on given tasks.
That way you have single line functions like vector\_add(in1, in2, out, size), etc...So you could actualy write really easy to read parallel programs in C.  You just have to know the first thing about software development.In short, your rant is a product of not knowing what you are doing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243181</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243533</id>
	<title>Re:What's so hard?</title>
	<author>johannesg</author>
	<datestamp>1244407560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Not trying to troll or anything, but I'd always hear of how parallel programming is very complicated for programmers, but then I learnt to use pthread in C to parallelise everything in my C program from parallel concurrent processing of the same things to threading any aspect of the program, and I was surprised by how simple and straightforward it was using pthread, <b>even creating a number of threads depending on the number of detected cores was simple</b>.</p></div><p>Really? With the pthread API? Pray tell, how does that work?</p><p>Note that reading from /proc/ is neither part of the pthread API, nor portable...</p>
	</htmltext>
<tokenext>Not trying to troll or anything , but I 'd always hear of how parallel programming is very complicated for programmers , but then I learnt to use pthread in C to parallelise everything in my C program from parallel concurrent processing of the same things to threading any aspect of the program , and I was surprised by how simple and straightforward it was using pthread , even creating a number of threads depending on the number of detected cores was simple.Really ?
With the pthread API ?
Pray tell , how does that work ? Note that reading from /proc/ is neither part of the pthread API , nor portable.. .</tokentext>
<sentencetext>Not trying to troll or anything, but I'd always hear of how parallel programming is very complicated for programmers, but then I learnt to use pthread in C to parallelise everything in my C program from parallel concurrent processing of the same things to threading any aspect of the program, and I was surprised by how simple and straightforward it was using pthread, even creating a number of threads depending on the number of detected cores was simple.Really?
With the pthread API?
Pray tell, how does that work?Note that reading from /proc/ is neither part of the pthread API, nor portable...
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245833</id>
	<title>Re:I'm waiting for parallel libs for R</title>
	<author>ceoyoyo</author>
	<datestamp>1244383020000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext><p>Whoever told you that is mistaken.</p><p>The <i>easiest</i> way to take advantage of a multiprocessing environment is to use techniques that will be familiar to any high level programmer.  For example, you don't write for loops, you call functions written in a low level language to do things like that for you.  Those low level functions can be easily parallelized, giving all your code a boost.</p></htmltext>
<tokenext>Whoever told you that is mistaken.The easiest way to take advantage of a multiprocessing environment is to use techniques that will be familiar to any high level programmer .
For example , you do n't write for loops , you call functions written in a low level language to do things like that for you .
Those low level functions can be easily parallelized , giving all your code a boost .</tokentext>
<sentencetext>Whoever told you that is mistaken.The easiest way to take advantage of a multiprocessing environment is to use techniques that will be familiar to any high level programmer.
For example, you don't write for loops, you call functions written in a low level language to do things like that for you.
Those low level functions can be easily parallelized, giving all your code a boost.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242943</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243359</id>
	<title>Re:What's so hard?</title>
	<author>Anonymous</author>
	<datestamp>1244406180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>even creating a number of threads depending on the number of detected cores was simple.</p></div><p>Are you guaranteed that those spawned threads will be evenly distributed amongst the cores on a given architecture?  There's also the matter of locality: you want the threads that deal with certain data to run on cores close to that data.
<br>
<br>
MT is not the same thing as MP.  You may have written a multi-threaded app, but on a single core you likely didn't see any perf gains.  MT apps on a single CPU core can have benefits -- such as your UI remaining responsive to the user during serious number crunching -- but at 100% CPU load this necessarily comes at the cost of your number-crunching taking longer.
<br>
<br>
MP scalability in software is hard because you don't know (and shouldn't assume) how many CPU cores are present in the user's system.  So you have to consider:<br>
   * Which aspects of your software's workload are independent and parallelizable<br>
   * How coarsely or finely to parallelize the work (as a runtime decision), based on the number of CPU cores present.
<br>
<br>
It's also hard because you have to forgo the simplicity you could have had with a single-threaded implementation, even when there is only one CPU core in the user's system.</p>
	</htmltext>
<tokenext>even creating a number of threads depending on the number of detected cores was simple.Are you guaranteed that those spawned threads will be evenly distributed amongst the cores , on a given architecture ?
There 's also a matter of locality ; you want the threads that are dealing with certain data to run on cores that are close to that data .
MT is not the same thing as MP .
You may have written a multi-threaded app , but when on a single-core you likely did n't see any perf gains .
MT apps on a single CPU core can have benefits-- such as , your UI can remain responsive to the user during serious number crunching-- but at 100 \ % CPU load , this necessarily comes with the cost of your number-crunching taking longer .
MP scalability in software is hard , because you do n't know ( and should n't assume ) how many CPU cores are present in the user 's system .
So , you have to give considerations to : * What aspects of your software 's workload is independent , and parallelizable * How coarsely or finely you should parallelize the work ( as a runtime decision , ) based on the number of CPU cores present .
It 's also hard because you have to forgo simplicity that you could have had with a single-threaded implementation , even when there is only one CPU core in the user 's system .</tokentext>
<sentencetext>even creating a number of threads depending on the number of detected cores was simple.Are you guaranteed that those spawned threads will be evenly distributed amongst the cores, on a given architecture?
There's also a matter of locality;  you want the threads that are dealing with certain data to run on cores that are close to that data.
MT is not the same thing as MP.
You may have written a multi-threaded app, but when on a single-core you likely didn't see any perf gains.
MT apps on a single CPU core can have benefits-- such as, your UI can remain responsive to the user during serious number crunching-- but at 100\% CPU load, this necessarily comes with the cost of your number-crunching taking longer.
MP scalability in software is hard, because you don't know (and shouldn't assume) how many CPU cores are present in the user's system.
So, you have to give considerations to:
   * What aspects of your software's workload is independent, and parallelizable
   * How coarsely or finely you should parallelize the work (as a runtime decision,) based on the number of CPU cores present.
It's also hard because you have to forgo simplicity that you could have had with a single-threaded implementation, even when there is only one CPU core in the user's system.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28251871</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>DragonWriter</author>
	<datestamp>1244480340000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>And how many of those cores are above 2% utilization for 90% of the day? Parallelization on the desktop is a solution in search of a problem-</p><p>The indication of the need for parallelization isn't the number of cores that are above 2% utilization for 90% of the day, but the number that are above 90% utilization for any part of the day.</p><blockquote><div><p>My email, web browsing, word processor, etc aren't cpu limited.</p></div> </blockquote><p>Some of us use our computers for more than that.</p>
	</htmltext>
<tokenext>And how many of those cores are above 2 \ % utilization for 90 \ % of the day ?
Parallelization on the desktop is a solution is search of a problem-The indication of the need to parallelization is n't the number of cores that are above 2 \ % utilization for 90 \ % of the day , but the number that are above 90 \ % utilization for any part of the day.My email , web browsing , word processor , etc are n't cpu limited .
Some of use our computers for more than that .</tokentext>
<sentencetext>And how many of those cores are above 2\% utilization for 90\% of the day?
Parallelization on the desktop is a solution is search of a problem-The indication of the need to parallelization isn't the number of cores that are above 2\% utilization for 90\% of the day, but the number that are above 90\% utilization for any part of the day.My email, web browsing, word processor, etc aren't cpu limited.
Some of use our computers for more than that.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243603</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243093</id>
	<title>GCC OpenMP</title>
	<author>Anonymous</author>
	<datestamp>1244404320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Agree.  Adding a new language isn't going to help much; maybe extending an old one is the best way to go.  In fact, GCC has supported a small set of parallel programming constructs (OpenMP, if you choose to enable it) at the compiler level for at least a year now.</p></htmltext>
<tokenext>Agree .
Adding a new language is n't going to help much .
Maybe extending an old one is the best way to go .
However , GCC has added ( if you choose ) a small subset of parallel programming constructs automatically at the compiler level for at least a year now .</tokentext>
<sentencetext>Agree.
Adding a new language isn't going to help much.
Maybe extending an old one is the best way to go.
However, GCC has added (if you choose) a small subset of parallel programming constructs automatically at the compiler level for at least a year now.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243111</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>Daniel Dvorkin</author>
	<datestamp>1244404380000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext><p>True enough, but the class of applications for which parallel processing <b>is</b> useful is growing rapidly as programmers learn to think in those terms.  Any program with a "for" or "while" loop in which the results of one iteration do not depend on the results of the previous iteration, as well as a fair number of such loops in which the results do have such a dependency, is a candidate for parallelization -- and that means most of the programs which most programmers will ever write.  We just need the languages not to make coding this way too painful.</p></htmltext>
<tokenext>True enough , but the class of applications for which parallel processing is useful is growing rapidly as programmers learn to think in those terms .
Any program with a " for " or " while " loop in which the results of one iteration do not depend on the results of the previous iteration , as well as a fair number of such loops in which the results do have such a dependency , is a candidate for parallelization -- and that means most of the programs which most programmers will ever write .
We just need the languages not to make coding this way too painful .</tokentext>
<sentencetext>True enough, but the class of applications for which parallel processing is useful is growing rapidly as programmers learn to think in those terms.
Any program with a "for" or "while" loop in which the results of one iteration do not depend on the results of the previous iteration, as well as a fair number of such loops in which the results do have such a dependency, is a candidate for parallelization -- and that means most of the programs which most programmers will ever write.
We just need the languages not to make coding this way too painful.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244479</id>
	<title>Re:What's so hard?</title>
	<author>4D6963</author>
	<datestamp>1244372100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Who said I used nothing but the pthread library, or nothing but standard calls? If you want to write a portable program that is more than a command-line tool, best believe you'll use platform-dependent #ifdefs no matter what.</p><p>Like it matters; it's just one call to make at the beginning of the program.</p></htmltext>
<tokenext>Who said I used nothing but the pthread library , or nothing but standard calls ?
If you want to write a portable program and you want to write more than a command line tool best believe you 'll use platform dependent # ifdefs no matter what.Like it matters , it 's just one call to make at the beginning of the program .</tokentext>
<sentencetext>Who said I used nothing but the pthread library, or nothing but standard calls?
If you want to write a portable program and you want to write more than a command line tool best believe you'll use platform dependent #ifdefs no matter what.Like it matters, it's just one call to make at the beginning of the program.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243533</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28254753</id>
	<title>COM to the rescue</title>
	<author>Anonymous</author>
	<datestamp>1244493120000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The easiest way I've found to take advantage of all the user's cores:</p><p>Build COM server exes.  Automatically call UnregisterClassObjects() after the first object is created, so the next attempt to create an object creates a new process.  Now I have:<br>1.  Multiple processes running in parallel on multiple cores.<br>2.  An easy programming API (the COM interface).<br>3.  No unintended memory sharing, so very little chance of race conditions.<br>4.  A very testable surface.  I can write test code in VB that exercises the COM objects.<br>5.  The ability to write different components in different languages.</p><p>Nothing like 1990s technology to solve today's problems.</p></htmltext>
<tokenext>The easiest way I 've found to take advantage of all the user 's cores : Build COM server exes .
Automatically call UnregisterClassObjects ( ) after the first object is created , so the next attempt to create an object creates a new process .
Now I have : 1 .
Multiple processes running in parallel on multiple cores.2 .
An easy programming API ( the COM interface ) .3 .
No unintended memory sharing , so very little chance of race conditions.4 .
A very testable surface .
I can write test code in VB that exercise the COM objects.5 .
Ability to write different components in different languages.Nothing like 1990 's technology no solve today 's problems .</tokentext>
<sentencetext>The easiest way I've found to take advantage of all the user's cores:Build COM server exes.
Automatically call UnregisterClassObjects() after the first object is created, so the next attempt to create an object creates a new process.
Now I have:1.
Multiple processes running in parallel on multiple cores.2.
An easy programming API (the COM interface).3.
No unintended memory sharing, so very little chance of race conditions.4.
A very testable surface.
I can write test code in VB that exercise the COM objects.5.
Ability to write different components in different languages.Nothing like 1990's technology no solve today's problems.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246123</id>
	<title>Re:I'm waiting for parallel libs for R</title>
	<author>gringer</author>
	<datestamp>1244385840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I've used <a href="http://www.sfu.ca/~sblay/R/snow.html" title="www.sfu.ca">snow</a> [www.sfu.ca] a few times. Seems to work quite well for the stuff I'm doing, especially when I'm working on a computer with MPI.</p></htmltext>
<tokenext>I 've used snow [ www.sfu.ca ] a few times .
Seems to work quite well for the stuff I 'm doing , especially when I 'm working on a computer with MPI .</tokentext>
<sentencetext>I've used snow [www.sfu.ca] a few times.
Seems to work quite well for the stuff I'm doing, especially when I'm working on a computer with MPI.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242943</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28266013</id>
	<title>Re:Old languages designed for parallel processing?</title>
	<author>Anonymous</author>
	<datestamp>1244563740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Erlang is for concurrency, not parallelism.</p><p>The former is about things genuinely happening at the same time (such as server/client communication, different parts of a UI, etc.); the latter is for speeding up a non-concurrent operation by running it on multiple threads. Erlang really offers nothing especially nice for doing that. Threads are usually the wrong abstraction for parallelism, for one thing.</p></htmltext>
<tokentext>Erlang is for concurrency , not parallelism.The former is about things genuinely happening at the same time ( such as server/client communication , different parts of a UI etc .
) , the latter is for speeding up a non-concurrent operation by running it on multiple threads .
Erlang really offers nothing especially nice for doing that .
Threads are usually the wrong abstraction for parallelism , for one thing .</tokentext>
<sentencetext>Erlang is for concurrency, not parallelism.The former is about things genuinely happening at the same time (such as server/client communication, different parts of a UI etc.
), the latter is for speeding up a non-concurrent operation by running it on multiple threads.
Erlang really offers nothing especially nice for doing that.
Threads are usually the wrong abstraction for parallelism, for one thing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243457</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244979</id>
	<title>Re:Old languages designed for parallel processing?</title>
	<author>Col Bat Guano</author>
	<datestamp>1244375820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The DoD didn't design Ada; they set a number of requirements and then held a competition.
<p>
Several design teams consisting of people from industry and academia worked on it.
</p><p>
Subsequent updates to the language (protected types are relevant for this topic) were added in Ada95. The parallel part of the language has remained pretty much stable since (interfaces were added, but it hasn't made a big difference to the parallel section).
</p><p>
What part of parallel programming in Ada don't you like?</p></htmltext>
<tokentext>The DoD did n't design Ada , they set a number of requirements then held a competition .
Several design teams consisting of people from industry and academia worked on it .
Subsequent updates to the language ( protected types are relevant for this topic ) were added in Ada95 .
The parallel part of the language has remained pretty much stable since ( interfaces were added , but it has n't made a big difference to the parallel section ) .
What part of parallel programming in Ada do n't you like ?</tokentext>
<sentencetext>The DoD didn't design Ada, they set a number of requirements then held a competition.
Several design teams consisting of people from industry and academia worked on it.
Subsequent updates to the language (protected types are relevant for this topic) were added in Ada95.
The parallel part of the language has remained pretty much stable since (interfaces were added, but it hasn't made a big difference to the parallel section).
What part of parallel programming in Ada don't you like?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243127</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243425</id>
	<title>Re:What's so hard?</title>
	<author>grumbel</author>
	<datestamp>1244406780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Implementing threading in a new app written from scratch isn't that hard (even though it has quite a few problems of its own); the real troublesome part is rewriting legacy code that wasn't built for threading, as such code often makes a lot of assumptions that simply break under threading.</p></htmltext>
<tokentext>Implementing threading in a new app written from scratch is n't that hard ( even so it has quite a bit problems on its own ) , the real troublesome part is rewriting legacy code that was n't build for threading , as that often makes a lot of assumptions that simply break in threading .</tokentext>
<sentencetext>Implementing threading in a new app written from scratch isn't that hard (even so it has quite a bit problems on its own), the real troublesome part is rewriting legacy code that wasn't build for threading, as that often makes a lot of assumptions that simply break in threading.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243747</id>
	<title>Re:Old languages designed for parallel processing?</title>
	<author>Bearhouse</author>
	<datestamp>1244366400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Before anyone at DARPA thinks that they can design a better language for concurrent parallel programming then I think they should be forced to spend 1 year learning Ada, and a second year working in Ada. If they survive they will most likely be cured of the thought that the Defense department can design good programming languages</p></div><p>Well, it's based on Pascal, so whatya expect?  Still, does work.  (The 777 flight control system is written in it...if it was written in, for example, C or VB, would you get on the 'plane?)</p>
	</htmltext>
<tokentext>Before anyone at DARPA thinks that they can design a better language for concurrent parallel programming then I think they should be forced to spend 1 year learning Ada , and a second year working in Ada .
If they survive they will most likely be cured of the thought that the Defense department can design good programming languagesWell , it 's based on Pascal , so whatya expect ?
Still , does work .
( The 777 flight control system is written in it...if it was written in , for example , C or VB , would you get on the 'plane ?
)</tokentext>
<sentencetext>Before anyone at DARPA thinks that they can design a better language for concurrent parallel programming then I think they should be forced to spend 1 year learning Ada, and a second year working in Ada.
If they survive they will most likely be cured of the thought that the Defense department can design good programming languagesWell, it's based on Pascal, so whatya expect?
Still, does work.
(The 777 flight control system is written in it...if it was written in, for example, C or VB, would you get on the 'plane?
)
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243127</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243013</id>
	<title>i'm gay</title>
	<author>Anonymous</author>
	<datestamp>1244403720000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>very gay!</p></htmltext>
<tokenext>very gay !</tokentext>
<sentencetext>very gay!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244255</id>
	<title>Re:Old languages designed for parallel processing?</title>
	<author>Anonymous</author>
	<datestamp>1244370660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Funny, all the hacker types I know who used to badmouth Ada just as you do, after having to learn it at university and then having to use it for real, ended up loving the language. And that includes the long-haired type<nobr> <wbr></nobr>;)</p><p>Ada is different enough that the initial encounter is rough. But that passes after using the language for a (significant) while, and then you can appreciate the reasons behind the language design decisions.</p><p>And for the curious, there is a "design rationale" document for each version of Ada. Very interesting read.</p></htmltext>
<tokentext>Funny , all the hacker type people I know that used to badmouth Ada just as you do after having to learn it at university and that had to use it for real afterward ended up loving the language .
And that includes the long haired type ; ) Ada is different enough that the initial encounter is rough .
But that pass after using the language for a ( significant ) while , and then you can appreciate the reasons behind the language design decisions.And for the curious , the is a " design rationale " document for each version of Ada .
Very interesting read .</tokentext>
<sentencetext>Funny, all the hacker type people I know that used to badmouth Ada just as you do after having to learn it at university and that had to use it for real afterward ended up loving the language.
And that includes the long haired type ;)Ada is different enough that the initial encounter is rough.
But that pass after using the language for a (significant) while, and then you can appreciate the reasons behind the language design decisions.And for the curious, the is a "design rationale" document for each version of Ada.
Very interesting read.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243127</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28252853</id>
	<title>CSP is the right way to do Multi-Threading.</title>
	<author>ralph.corderoy</author>
	<datestamp>1244484600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Any discussion of parallel programming would benefit from having read and understood the resources and history covered by Russ Cox at <a href="http://swtch.com/~rsc/thread/" title="swtch.com" rel="nofollow">http://swtch.com/~rsc/thread/</a> [swtch.com]</p></htmltext>
<tokentext>Any discussion of parallel programming would benefit from have read and understood the resources and history covered by Russ Cox at http : //swtch.com/ ~ rsc/thread/ [ swtch.com ]</tokentext>
<sentencetext>Any discussion of parallel programming would benefit from have read and understood the resources and history covered by Russ Cox at http://swtch.com/~rsc/thread/ [swtch.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28248205</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>bertok</author>
	<datestamp>1244452200000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext><p>The % utilization metric is a red herring. Most servers are underutilized by that metric, which is why VMware is making so much money consolidating them!</p><p>Users don't actually notice, or care, about CPU utilization. What users notice is <i>latency</i>. If my computer is 99% idle, that's fine, but I want it to respond to mouse clicks in a timely fashion. I don't want to wait, even if it's just a few hundred milliseconds. This is where parallel computation can bring big wins.</p><p>One thing I noticed is that MS SQL Server still has its default "threshold for query parallelism" set to "5", which AFAIK means that if the query planner estimates that a query will take more than 5 <i>seconds</i>, it'll attempt a parallel query plan instead. That's insane! I don't know what kind of users Microsoft is thinking of, but in my world, if a form takes 5 seconds to display, it's way too slow to be considered acceptable. Many servers now have 8 or more cores, and 24 (4x hexacore) is going to be common for database servers very soon. In that picture, even if overhead limits you to a 15x speedup, 5 seconds becomes something like 300 milliseconds!</p><p>Ordinary Windows applications can benefit from the same kind of speedup. For example, a huge number of applications use compression internally (all Java JAR files, all the docx-style Office 2007 files, etc...), yet the only parallel compressor I know of is WinRAR, which really does get 4x the speed on my quad-core. Did you know that the average compression rate for a normal algorithm like zip is something like 10MB/sec/core? That's <i>pathetic</i>. A Core i7 with 8 threads could probably do the same thing at 60 MB/sec or more, which is more in line with, say, gigabit ethernet speeds, or a typical hard-drive.</p><p>In other words, for a large class of apps, your hard-drive is not the bottleneck, your CPU is. How pathetic is that?
A modern CPU has 4 or more cores, and it's busy hammering just one of those while your hard-drive, a <i>mechanical</i> component, is waiting to send it more data.</p><p>You wait until you get an SSD. Suddenly, a whole range of apps become "cpu limited".</p></htmltext>
<tokentext>The \ % utilization metric is a red herring .
Most servers are underutilized by that metric , which is why VMware is making so much money consolidating them ! Users do n't actually notice , or care , about CPU utilization .
What users notice , is latency .
If my computer is 99 \ % idle , that 's fine , but I want it to respond to mouse clicks in a timely fashion .
I do n't want to wait , even if it 's just a few hundred milliseconds .
This is where parallel computation can bring big wins.One thing I noticed is that MS SQL Server still has its default " threshold for query parallelism " set to " 5 " , which AFAIK means that if the query planner estimates that a query will take more than 5 seconds , it 'll attempt a parallel query plan instead .
That 's insane !
I do n't know what kind of users Microsoft is thinking of , but in my world , if a form takes 5 seconds to display , it 's way too slow to be considered acceptable .
Many servers now have 8 or more cores , and 24 ( 4x hexacore ) is going to be common for database servers very soon .
In that picture , even if you only consider a 15x speedup due to overhead , 5 seconds becomes something like 300 milliseconds ! Ordinary Windows applications can benefit from the same kind of speedup .
For example , a huge number of applications use compression internally ( all Java JAR files , of the docx-style Office 2007 files , etc... ) , yet the only parallel compressor I know of is WinRAR , which really does get 4x the speed on my quad-core .
Did you know that the average compression rate for a normal algorithm like zip is something like 10MB/sec/core ?
That 's pathetic .
A Core i7 with 8 threads could probably do the same thing at 60 MB/sec or more , which is more in line with , say , gigabit ethernet speeds , or a typical hard-drive.In other words , for a large class of apps , your hard-drive is not the bottleneck , your CPU is .
How pathetic is that ?
A modern CPU has 4 or more cores , and it 's busy hammering just one of those while your hard-drive , a mechanical component , is waiting to send it more data.You wait until you get an SSD .
Suddenly , a whole range of apps become " cpu limited " .</tokentext>
<sentencetext>The \% utilization metric is a red herring.
Most servers are underutilized by that metric, which is why VMware is making so much money consolidating them!Users don't actually notice, or care, about CPU utilization.
What users notice, is latency.
If my computer is 99\% idle, that's fine, but I want it to respond to mouse clicks in a timely fashion.
I don't want to wait, even if it's just a few hundred milliseconds.
This is where parallel computation can bring big wins.One thing I noticed is that MS SQL Server still has its default "threshold for query parallelism" set to "5", which AFAIK means that if the query planner estimates that a query will take more than 5 seconds, it'll attempt a parallel query plan instead.
That's insane!
I don't know what kind of users Microsoft is thinking of, but in my world, if a form takes 5 seconds to display, it's way too slow to be considered acceptable.
Many servers now have 8 or more cores, and 24 (4x hexacore) is going to be common for database servers very soon.
In that picture, even if you only consider a 15x speedup due to overhead, 5 seconds becomes something like 300 milliseconds!Ordinary Windows applications can benefit from the same kind of speedup.
For example, a huge number of applications use compression internally (all Java JAR files, of the docx-style Office 2007 files, etc...), yet the only parallel compressor I know of is WinRAR, which really does get 4x the speed on my quad-core.
Did you know that the average compression rate for a normal algorithm like zip is something like 10MB/sec/core?
That's pathetic.
A Core i7 with 8 threads could probably do the same thing at 60 MB/sec or more, which is more in line with, say, gigabit ethernet speeds, or a typical hard-drive.In other words, for a large class of apps, your hard-drive is not the bottleneck, your CPU is.
How pathetic is that?
A modern CPU has 4 or more cores, and it's busy hammering just one of those while your hard-drive, a mechanical component, is waiting to send it more data.You wait until you get an SSD.
Suddenly, a whole range of apps become "cpu limited".</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243603</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245775</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>cryptoluddite</author>
	<datestamp>1244382240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Any program with a "for" or "while" loop in which the results of one iteration do not depend on the results of the previous iteration<nobr> <wbr></nobr>... We just need the languages not to make coding this way too painful.</p></div><p>It's not the languages that are the problem, it's the operating systems.  Check it:</p><p>1. Start 4 threads because OS said there are 4 cores<br>2. Each thread starts doing N/4 iterations of the loop<br>3. Each thread does some kind of synchronization to say it exited.  It could be just a write, but there needs to be some way to wake up the original thread.<br>4. The original thread blocks until all the other threads have completed, then continues operating.</p><p>The best case is that there is lots of overhead creating a new thread, and the other threads have completed before the original one so that it doesn't have to wait.</p><p>The worst case is that the new threads don't get scheduled right away.  Maybe flash is using 100% CPU on one core, so one thread might not even start running until one of the others completes.  Anyway you end up with several threads not running and being scheduled at some later time.  The for loop ends up taking several times longer even though it's running 'in parallel'.</p><p>Now what about a case like this:</p><p>1. ask os to split thread across multiple CPUs.  This would be a guarantee that the threads would run immediately on the number of CPUs returned by the call.<br>2. run the loops on N cores (however many the OS returns).<br>3. each thread 'exits' when it is done.  For the last thread to exit, the OS returns to it instead of exiting.  Whichever thread finished last continues.</p><p>This doesn't suffer from the threads originally not starting right away and has no real synchronization except in the OS itself.  It's awesome for smallish loops.  It's also impossible to do with current operating systems.</p>
	</htmltext>
<tokentext>Any program with a " for " or " while " loop in which the results of one iteration do not depend on the results of the previous iteration ... We just need the languages not to make coding this way too painful.It 's not the languages that are the problem , it 's the operating systems .
Check it : 1 .
Start 4 threads because OS said there are 4 cores2 .
Each thread starts doing N/4 iterations of the loop3 .
Each thread does some kind of synchronization to say it exited .
It could be just a write , but there needs to be some way to wake up the original thread.4 .
The original thread blocks until all the other threads have completed , then continues operating.The best case is that there is lots of overhead creating a new thread , and the other threads have completed before the original one so that it does n't have to wait.The worst case is that the new threads do n't get scheduled right away .
Maybe flash is using 100 \ % CPU on one core , so one thread might not even start running until one of the others completes .
Anyway you end up with several threads not running and being scheduled at some later time .
The for loop ends up taking several times longer even though it 's running 'in parallel'.Now what about a case like this : 1. ask os to split thread across multiple CPUs .
This would be a guarantee that the threads would run immediately on the number of CPUs returned by the call.2 .
run the loops on N cores ( however many the OS returns ) .3. each thread 'exits ' when it is done .
For the last thread to exit , the OS returns to it instead of exiting .
Whichever thread finished last continues.This does n't suffer from the threads originally not starting right away and has no real synchronization except in the OS itself .
It 's awesome for smallish loops .
It 's also impossible to do with current operating systems .</tokentext>
<sentencetext>Any program with a "for" or "while" loop in which the results of one iteration do not depend on the results of the previous iteration ... We just need the languages not to make coding this way too painful.It's not the languages that are the problem, it's the operating systems.
Check it:1.
Start 4 threads because OS said there are 4 cores2.
Each thread starts doing N/4 iterations of the loop3.
Each thread does some kind of synchronization to say it exited.
It could be just a write, but there needs to be some way to wake up the original thread.4.
The original thread blocks until all the other threads have completed, then continues operating.The best case is that there is lots of overhead creating a new thread, and the other threads have completed before the original one so that it doesn't have to wait.The worst case is that the new threads don't get scheduled right away.
Maybe flash is using 100\% CPU on one core, so one thread might not even start running until one of the others completes.
Anyway you end up with several threads not running and being scheduled at some later time.
The for loop ends up taking several times longer even though it's running 'in parallel'.Now what about a case like this:1. ask os to split thread across multiple CPUs.
This would be a guarantee that the threads would run immediately on the number of CPUs returned by the call.2.
run the loops on N cores (however many the OS returns).3. each thread 'exits' when it is done.
For the last thread to exit, the OS returns to it instead of exiting.
Whichever thread finished last continues.This doesn't suffer from the threads originally not starting right away and has no real synchronization except in the OS itself.
It's awesome for smallish loops.
It's also impossible to do with current operating systems.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243111</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28253119</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>sjames</author>
	<datestamp>1244485860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Not to mention the many apps (especially background apps) that won't benefit in the slightest from being parallelized, and the various network server apps that get all the benefit they need using the old naive fork-on-connect. For those, per process, the network is the bottleneck. Extra cores mean more clients can be served at once without a slowdown, but no amount of parallelization of the per-client process would speed things up. In many cases, that will remain true even given 10Gbps networking.</p></htmltext>
<tokentext>Not to mention the many apps ( especially background apps ) that wo n't benefit in the slightest by being parallelized and the many various network server apps that get all the benefit they need using the old naive fork on connect .
For those , per process , the network is the bottleneck .
Extra cores mean more clients can be served at once without a slowdown but no amount of parallelization of the per-client process would speed things up .
In many cases , that will remain true even given 10Gbps networking .</tokentext>
<sentencetext>Not to mention the many apps (especially background apps) that won't benefit in the slightest by being parallelized and the many various network server apps that get all the benefit they need using the old naive fork on connect.
For those, per process, the network is the bottleneck.
Extra cores mean more clients can be served at once without a slowdown but no amount of parallelization of the per-client process would speed things up.
In many cases, that will remain true even given 10Gbps networking.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243335</id>
	<title>Re:What's so hard?</title>
	<author>Unoti</author>
	<datestamp>1244406060000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext><p>The fact that it seems so simple at first is where the problem starts.  You had no trouble in your <b>program</b>.  One program.  That's a great start.  Now do something non-trivial.  Say, make something that simulates digital circuits: AND gates, OR gates, NOT gates.  Let them be wired up together.  Accept an arbitrarily complex setup of digital logic gates.  Have it simulate the outputs propagating to the inputs.  And make it so that it expands across an arbitrary number of threads, and make it expand across an arbitrary number of processes, both on the same computer and on other computers on the same network.</p><p>There are some languages and approaches you could choose for such a project that will help you avoid the kinds of pitfalls that await you, and provide most or all of the infrastructure that you'd have to write yourself in other languages.</p><p>If you're interested in learning more about parallel programming, why it's hard, and what can go wrong, and how to make it easy, I suggest you read a book about <a href="http://www.amazon.com/Programming-Erlang-Software-Concurrent-World/dp/193435600X" title="amazon.com">Erlang</a> [amazon.com].  Then read a book about <a href="http://www.amazon.com/Programming-Scala-Comprehensive-Step-step/dp/0981531601/ref=sr_1_1?ie=UTF8&amp;s=books&amp;qid=1244402328&amp;sr=1-1" title="amazon.com">Scala.</a> [amazon.com]</p><p>The thing is, it looks easy at first, and it really is easy at first.  Then you launch your application into production, and stuff goes real funny and it's nigh unto impossible to troubleshoot what's wrong.  In the lab, it's always easy.  With multithreaded/multiprocess/multi-node systems, you've got to work very very hard to make them mess up in the lab the same way they will in the real world.  So it seems like not a big deal at first until you launch the stuff and have to support it running every day in crazy unpredictable conditions.</p></htmltext>
<tokentext>The fact that it seems so simple at first is where the problem starts .
You had no trouble in your program .
One program .
That 's a great start .
Now do something non-trivial .
Say , make something that simulates digital circuits-- and gates , or gates , not gates .
Let them be wired up together .
Accept an arbitrarily complex setup of digital logic gates .
Have it simulate the outputs propagating to the inputs .
And make it so that it expands across an arbitrary number of threads , and make it expand across an arbitrary number of processes , both on the same computer and on other computers on the same network.There are some languages and approaches you could choose for such a project that will help you avoid the kinds of pitfalls that await you , and provide most or all of the infrastructure that you 'd have to write yourself in other languages.If you 're interested in learning more about parallel programming , why it 's hard , and what can go wrong , and how to make it easy , I suggest you read a book about Erlang [ amazon.com ] .
Then read a book about Scala .
[ amazon.com ] The thing is , it looks easy at first , and it really is easy at first .
Then you launch your application into production , and stuff goes real funny and it 's nigh unto impossible to troubleshoot what 's wrong .
In the lab , it 's always easy .
With multithreaded/multiprocess/multi-node systems , you 've got to work very very hard to make them mess up in the lab the same way they will in the real world .
So it seems like not a big deal at first until you launch the stuff and have to support it running every day in crazy unpredictable conditions .</tokentext>
<sentencetext>The fact that it seems so simple at first is where the problem starts.
You had no trouble in your program.
One program.
That's a great start.
Now do something non-trivial.
Say, make something that simulates digital circuits-- and gates, or gates, not gates.
Let them be wired up together.
Accept an arbitrarily complex setup of digital logic gates.
Have it simulate the outputs propagating to the inputs.
And make it so that it expands across an arbitrary number of threads, and make it expand across an arbitrary number of processes, both on the same computer and on other computers on the same network.There are some languages and approaches you could choose for such a project that will help you avoid the kinds of pitfalls that await you, and provide most or all of the infrastructure that you'd have to write yourself in other languages.If you're interested in learning more about parallel programming, why it's hard, and what can go wrong, and how to make it easy, I suggest you read a book about Erlang [amazon.com].
Then read a book about Scala.
[amazon.com]The thing is, it looks easy at first, and it really is easy at first.
Then you launch your application into production, and stuff goes real funny and it's nigh unto impossible to troubleshoot what's wrong.
In the lab, it's always easy.
With multithreaded/multiprocess/multi-node systems, you've got to work very very hard to make them mess up in the lab the same way they will in the real world.
So it seems like not a big deal at first until you launch the stuff and have to support it running every day in crazy unpredictable conditions.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28247963</id>
	<title>Re:What's so hard?</title>
	<author>Anonymous</author>
	<datestamp>1244493300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>In C#/.NET 4.0 it will not be much longer. Use Parallel.For() for that.</p></htmltext>
<tokentext>IN C # /.net 4.0 it will not be much longer .
Use Parallel.For ( ) for that .</tokentext>
<sentencetext>IN C#/.net 4.0 it will not be much longer.
Use Parallel.For() for that.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243181</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243457</id>
	<title>Re:Old languages designed for parallel processing?</title>
	<author>Anonymous</author>
	<datestamp>1244406960000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext>Erlang is probably the best language available for servers and similar applications. Not only is it inherently parallel (though they've only recently made the engine itself multithreaded, as the parallelism is in the software), but it is very easily networked as well. As a result, a well-written Erlang program can only be taken down by simultaneously killing an entire cluster of computers.<br> <br>

What's more, it has a little-seen feature of being able to handle code upgrades to almost any component of the program without ever stopping - it keeps two versions of each module (old and new) in memory, and code can be written to automatically ensure a smooth transition into the new code when the upgrade occurs.<br> <br>

If I recall correctly, the Swedish telecom where Erlang was designed had one server running it with <i>7 continuous years uptime</i>.</htmltext>
<tokenext>Erlang is probably the best language for servers and similar applications available .
Not only is in inherently parallel ( though they 've only recently actually made the engine multithreaded , as the paralellism is in the software ) , but it is very easily networked as well .
As a result , a well-written Erlang program can only be taken down by simultaneously killing an entire cluster of computers .
What 's more , it has a little-seen feature of being able to handle code upgrades to most any component of the program without ever stopping - it keeps two versions of each module ( old and new ) in memory , and code can be written to automatically ensure a smooth transition into the new code when the upgrade occurs .
If I recall correctly , the Swedish telecom where Erlang was designed had one server running it with 7 continuous years uptime .</tokentext>
<sentencetext>Erlang is probably the best language for servers and similar applications available.
Not only is in inherently parallel (though they've only recently actually made the engine multithreaded, as the paralellism is in the software), but it is very easily networked as well.
As a result, a well-written Erlang program can only be taken down by simultaneously killing an entire cluster of computers.
What's more, it has a little-seen feature of being able to handle code upgrades to most any component of the program without ever stopping - it keeps two versions of each module (old and new) in memory, and code can be written to automatically ensure a smooth transition into the new code when the upgrade occurs.
If I recall correctly, the Swedish telecom where Erlang was designed had one server running it with 7 continuous years uptime.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243127</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28251653</id>
	<title>Parallel Processing - see FPGA</title>
	<author>surdumil</author>
	<datestamp>1244479140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I think that the whole parallel-processing notion is being stood on its head by logic developers working on embedded FPGA apps.

The languages of choice are mostly based on VHDL and Verilog.

The processors involved can be DSP hardware blocks, embedded processors (implemented as hard-core or soft-core logic), and an arbitrary number of custom state machines, all operating independently or in various lock-step arrangements.  Communications between processors are defined as required.

The latest FPGAs offer a couple of thousand DSP blocks implemented in hard logic, all with localized memory stores.  They can be arbitrarily grouped and/or run independently, and can be clocked at different rates according to requirements.  The resulting parallel-processing power and versatility are astounding.</htmltext>
<tokenext>I think that the whole parallel processing notion is being stood on its head by logic developers working on embedded FPGA apps .
The languages of choice are mostly based on VHDL and verilog .
The processors involved can be DSP hardware blocks , embedded processors ( implemented hardcore or softcore logic ) , and an arbitrary number of custom state machines , all operating independently or in various locked-step arrangements .
Communications between processors are defined as required .
Latest FPGAs offer a couple of thousand DSP blocks implemented in hard logic , all with localized memory stores .
They can be arbitrarily grouped and/or can run independently , and can be clocked at different rates according to requirements .
The resulting parallel processing power and versatility is astounding .</tokentext>
<sentencetext>I think that the whole parallel processing notion is being stood on its head by logic developers working on embedded FPGA apps.
The languages of choice are mostly based on VHDL and verilog.
The processors involved can be DSP hardware blocks, embedded processors (implemented hardcore or softcore logic), and an arbitrary number of custom state machines, all operating independently or in various locked-step arrangements.
Communications between processors are defined as required.
Latest FPGAs offer a couple of thousand DSP blocks implemented in hard logic, all with localized memory stores.
They can be arbitrarily grouped and/or can run independently, and can be clocked at different rates according to requirements.
The resulting parallel processing power and versatility is astounding.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245167</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>zuperduperman</author>
	<datestamp>1244377320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I think you overlook cause and effect here, and suffer from a lack of imagination. The fact is, we have largely not developed applications that can use parallel processing because we haven't had parallel processing to do it with. Many existing design patterns (e.g. MVC or Document/View) are actually contortions of parallel algorithms to deal with the fact that our programming paradigms are not parallel. Once mainstream languages incorporate parallel processing in their core as primitives, we may think very differently about many existing computing problems, and we will probably see very different designs emerge, and whole new classes of application that we never thought of before.</p></htmltext>
<tokenext>I think you overlook cause and effect here , and suffer from a lack of imagination .
The fact is , we have largely not developed applications that can use parallel processing because we have n't had parallel processing to do it with .
Many existing design patterns ( eg : MVC or Document / View ) are actually contortions of parallel algorithms to deal with the fact that our programming paradigms are not parallel .
Once mainstream languages incorporate parallel processing in their core as primitives we may think very differently about many existing computing problems , and probably will see very different designs emerge , and whole new classes of application that we never thought of before .</tokentext>
<sentencetext>I think you overlook cause and effect here, and suffer from a lack of imagination.
The fact is, we have largely not developed applications that can use parallel processing because we haven't had parallel processing to do it with.
Many existing design patterns (eg: MVC or Document / View) are actually  contortions of parallel algorithms to deal with the fact that our programming paradigms are not parallel.
Once mainstream languages incorporate parallel processing in their core as primitives we may think very differently about many existing computing problems, and probably will see very different designs emerge, and whole new classes of application that we never thought of before.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243091</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>Nursie</author>
	<datestamp>1244404320000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext><p>How blinkered are you?</p><p>There exist whole classes of software that have been doing parallel execution, be it through threads, processes or messaging, for decades.</p><p>Look at any/all server software, for god's sake; look at Apache, or any database, or any transaction engine.</p><p>If you're talking about <b>desktop</b> apps then make it clear. The thing with most of those is that the machines far exceed their requirements with a single core, most of the time. But stuff like video encoding has been threaded for a while too.</p></htmltext>
<tokenext>How blinkered are you ? There exist whole classes of software that have been doing parallel execution , be it through threads , processes or messaging , for decades.Look at any/all server software , for god 's sake , look at apache , or any database , or any transaction engine.If you 're talking about desktop apps then make it clear .
The thing with most of those is that the machines far exceed their requirements with a single core , most of the time .
But stuff like video encoding has been threaded for a while too .</tokentext>
<sentencetext>How blinkered are you?There exist whole classes of software that have been doing parallel execution, be it through threads, processes or messaging, for decades.Look at any/all server software, for god's sake, look at apache, or any database, or any transaction engine.If you're talking about desktop apps then make it clear.
The thing with most of those is that the machines far exceed their requirements with a single core, most of the time.
But stuff like video encoding has been threaded for a while too.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</id>
	<title>What's so hard?</title>
	<author>4D6963</author>
	<datestamp>1244404080000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>Not trying to troll or anything, but I'd always heard about how parallel programming is very complicated for programmers. Then I learnt to use pthreads in C to parallelise everything in my C program, from concurrent processing of the same things to threading any aspect of the program, and I was surprised by how simple and straightforward it was; even creating a number of threads depending on the number of detected cores was simple.

</p><p>OK, maybe what I did was simple enough, but I just don't see what's so inherently hard about parallel programming. Surely I am missing something.</p></htmltext>
<tokenext>Not trying to troll or anything , but I 'd always hear of how parallel programming is very complicated for programmers , but then I learnt to use pthread in C to parallelise everything in my C program from parallel concurrent processing of the same things to threading any aspect of the program , and I was surprised by how simple and straightforward it was using pthread , even creating a number of threads depending on the number of detected cores was simple .
OK , maybe what I did was simple enough , but I just do n't see what 's so inherently hard about parellel programming .
Surely I am missing something .</tokentext>
<sentencetext>Not trying to troll or anything, but I'd always hear of how parallel programming is very complicated for programmers, but then I learnt to use pthread in C to parallelise everything in my C program from parallel concurrent processing of the same things to threading any aspect of the program, and I was surprised by how simple and straightforward it was using pthread, even creating a number of threads depending on the number of detected cores was simple.
OK, maybe what I did was simple enough, but I just don't see what's so inherently hard about parellel programming.
Surely I am missing something.</sentencetext>
</comment>
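The pattern 4D6963 describes - spawn one thread per detected core and hand each thread its own slice of the work - really is simple when nothing is shared. A sketch of that shape in Python's standard `threading` module (the function name and the striped chunking scheme are illustrative, not the commenter's actual pthread code):

```python
import os
import threading

def threaded_sum(data):
    """Split a sum across one thread per detected core - the same shape as
    creating N pthreads for N cores, sketched in Python for brevity."""
    n = max(1, os.cpu_count() or 1)          # thread count = detected cores
    chunks = [data[i::n] for i in range(n)]  # each thread gets its own stripe
    results = [0] * n                        # one slot per thread: nothing shared

    def worker(idx):
        results[idx] = sum(chunks[idx])      # writes only its own slot

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)
```

Note why this stays easy: each thread owns a disjoint slice and a disjoint result slot, so there is no locking at all. The difficulty the replies below describe begins exactly when that property is lost.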
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244365</id>
	<title>Re:What's so hard?</title>
	<author>4D6963</author>
	<datestamp>1244371560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p> <i>How does that add data to the buffer in a way that doesn't conflict with the physics thread?</i> </p><p>

Well, I don't think that's that complicated, actually. Make the physics thread get all the data from the AI thread at once, let it perform its little loop, let it take AI data again, repeat... That's why I said it seems simple. Of course it's not always simple, but as far as my limited experience goes, all you have to do is find an elegant way to do this on paper and it all works out. I for one am not a big fan of locking; I try as much as possible to use mutex-free code and just design things so that threads can keep doing their thing blindly without ever waiting. Perhaps that's not always possible, but I think it's possible in most cases.

</p><p>As for the physics-rendering problem, I think the most elegant solution is to use double buffering as you said, but to avoid the problem of the rendering loop starting just before the physics one ends, perhaps you can use some time measurement to decide whether the rendering loop should start now or wait for the physics loop to finish.

</p><p>I for one have never run into any problems debugging with GDB, and I think that if you get deadlocks on quad cores then there's something wrong with your design to begin with, i.e. you didn't plan for N cores correctly.

</p><p>That said, you're absolutely right about it being easy if you design it from the ground up. Actually I'd consider turning a completed single-threaded program into a parallelised one to be madness, not that you always have the choice though...</p></htmltext>
<tokenext>How does that add data to the buffer in a way that does n't conflict with the physics thread ?
Well , I do n't think that 's that complicated actually .
Make the physics thread get all the data from the AI thread all at once , let it perform its little loop , let it take AI data again , repeat... That 's why I said it seems simple , of course it 's not always simple , but as far as my limited experience goes all you have to do is find an elegant way to do this on paper and it all works out .
I for one am not a big fan of locking , I try as much as possible to use mutex free code and just design things so that threads can keep doing their thing blindly without ever waiting .
Well perhaps that 's not always possible , but I think it 's possible in most cases .
As for the physics-rendering problem , I think the most elegant solution is to use double buffering as you said , but to avoid the problem of the rendering loop starting just before the physics one ends , perhaps you can use some time measurement to determine whether the rendering loop should start or if it 's late enough in the physics loop for it to wait for it .
I for one have never met any problems with debugging using GDB , and I think that if you get deadlocks on quad cores then there 's something wrong about your design to begin with , i.e .
you did n't plan for N cores correctly .
This being said you 're absolutely right about it being easy if you design it from the ground up .
Actually I 'd consider turning a complete single threaded program into a parallelised program to be madness , not that you always have the choice though. .</tokentext>
<sentencetext> How does that add data to the buffer in a way that doesn't conflict with the physics thread?
Well, I don't think that's that complicated actually.
Make the physics thread get all the data from the AI thread all at once, let it perform its little loop, let it take AI data again, repeat... That's why I said it seems simple, of course it's not always simple, but as far as my limited experience goes all you have to do is find an elegant way to do this on paper and it all works out.
I for one am not a big fan of locking, I try as much as possible to use mutex free code and just design things so that threads can keep doing their thing blindly without ever waiting.
Well perhaps that's not always possible, but I think it's possible in most cases.
As for the physics-rendering problem, I think the most elegant solution is to use double buffering as you said, but to avoid the problem of the rendering loop starting just before the physics one ends, perhaps you can use some time measurement to determine whether the rendering loop should start or if it's late enough in the physics loop for it to wait for it.
I for one have never met any problems with debugging using GDB, and I think that if you get deadlocks on quad cores then there's something wrong about your design to begin with, i.e.
you didn't plan for N cores correctly.
This being said you're absolutely right about it being easy if you design it from the ground up.
Actually I'd consider turning a complete single threaded program into a parallelised program to be madness, not that you always have the choice though..</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243369</parent>
</comment>
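The double-buffering scheme discussed above (physics writes one buffer while rendering only ever reads the other, with a swap at the frame boundary) can be sketched as follows. This is a hedged illustration in Python rather than the C the commenters have in mind, and the class and method names are made up for the example:

```python
import threading

class DoubleBuffer:
    """Writer fills the back buffer while readers only ever see the front
    one; swap() publishes a finished frame. A sketch of the idea, not
    anyone's actual engine code."""

    def __init__(self):
        self._front = {}                  # what the render thread reads
        self._back = {}                   # what the physics thread writes
        self._swap_lock = threading.Lock()

    def write(self, key, value):
        self._back[key] = value           # producer side: never touches front

    def read(self, key):
        return self._front.get(key)       # consumer side: never touches back

    def swap(self):
        # The only moment of contention: exchange the buffers atomically.
        with self._swap_lock:
            self._front, self._back = self._back, dict(self._front)

buf = DoubleBuffer()
buf.write("ball", (3.0, 4.0))             # physics step fills the back buffer
buf.swap()                                # frame boundary: publish the frame
```

The appeal is the one the commenter states: outside the brief swap, both threads run blindly without waiting, because they never touch the same buffer.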
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243827</id>
	<title>Re:What's so hard?</title>
	<author>jbolden</author>
	<datestamp>1244367360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>An example is the locking problem on variables that are shared. Which variables get locked, and for how long? How does the lock get released? Too many locks and you run sequentially; too few and you corrupt your shared data.</p></htmltext>
<tokenext>An example is the locking problem on variables that are shared .
Which variables get locked , for how long ?
How does the lock get released ?
To many locks you run sequentially , too few you corrupt your threads .</tokentext>
<sentencetext>An example is the  locking problem on variables that are shared.
Which variables get locked, for how long?
How does the lock get released?
To many locks you run sequentially, too few you corrupt your threads.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
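jbolden's trade-off can be made concrete with a minimal sketch, in Python rather than C: one lock guarding one shared counter. Holding the lock across the read-modify-write makes it correct; dropping it lets the increments race, and locking far more than this serialises the threads entirely.

```python
import threading

counter = 0                       # the shared variable under discussion
lock = threading.Lock()           # the lock that protects it

def add_many(n):
    global counter
    for _ in range(n):
        with lock:                # hold the lock only for the read-modify-write
            counter += 1          # load, add, store: not atomic without the lock

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock, counter is exactly 40000, however the threads interleave.
```

This also shows why the bugs hide in the lab, as an earlier comment notes: without the lock the code usually still produces 40000, and only occasionally loses updates.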
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987</id>
	<title>Parallel is here to stay but not for every app</title>
	<author>Anonymous</author>
	<datestamp>1244403540000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>Parallel is not going to go anywhere, but it is only really valid for certain types of applications. Larger items like operating systems or most system tasks need it. Whether it is worthwhile in lowly application land is a case-by-case decision; it will mostly depend on the skill of the programmers involved and the budget for the particular application in question.</htmltext>
<tokenext>Parallel is not going to go anywhere but is only really valid for certain types if applications .
Larger items like operating systems or most system tasks need it .
Whether it is worthwhile in lowly application land is a case by case decision ; but will mostly depend on the skill of programmers involved and the budget for the particular application in question .</tokentext>
<sentencetext>Parallel is not going to go anywhere but is only really valid for certain types if applications.
Larger items like operating systems or most system tasks need it.
Whether it is worthwhile in lowly application land is a case by case decision; but will mostly depend on the skill of programmers involved and the budget for the particular application in question.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243803</id>
	<title>Re:Parallel programming is dead. No one uses it...</title>
	<author>Vanders</author>
	<datestamp>1244367060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Threading i don't count as parallel processing for the desktop.</p></div></blockquote><p>
Multiple threads on a single CPU may not be parallel, but the moment you add more than one core, of course it is parallel.</p>
	</htmltext>
<tokenext>Threading i do n't count as parallel processing for the desktop .
Multiple threads on a single CPU may not be parallel , but the moment you add more than one core , of course it is parallel .</tokentext>
<sentencetext>Threading i don't count as parallel processing for the desktop.
Multiple threads on a single CPU may not be parallel, but the moment you add more than one core, of course it is parallel.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244749</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>Anonymous</author>
	<datestamp>1244374260000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Holy crap - have you been on the web lately? Facebook + cnn + foxnews + huffingtonpost + (now) slashdot + hulu maxes out my CPUs at 150% (1.5 CPUs) doing absolutely nothing. All that GD Flash and background JS. To say nothing of their memory leaks (my laptop becomes useless if I leave these open overnight).</p></htmltext>
<tokenext>Holy crap - have you been on the web lately ?
Facebook + cnn + foxnews + huffingtonpost + ( now ) slashdot + hulu maxes out my CPUs to 150 \ % ( 1.5 cpus ) doing absolutely nothing .
All that GD flash and background JS .
To say nothing of their memory leaks ( my laptop becomes useless if I leave these open overnight )</tokentext>
<sentencetext>Holy crap - have you been on the web lately?
Facebook + cnn + foxnews + huffingtonpost + (now) slashdot + hulu maxes out my CPUs to 150\% (1.5 cpus) doing absolutely nothing.
All that GD flash and background JS.
To say nothing of their memory leaks (my laptop becomes useless if I leave these open overnight)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243603</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28249785</id>
	<title>Interesting article</title>
	<author>Banador</author>
	<datestamp>1244469240000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext>Threads Cannot be Implemented as a Library. That means pthreads is bad. Read: <a href="http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf" title="hp.com" rel="nofollow">http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf</a> [hp.com] <br> <br>

Then, after a few years, work on the Java memory model found a good solution. Read: Foundations of the C++ concurrency memory model [based on the Java memory model]

<a href="http://www.hpl.hp.com/techreports/2008/HPL-2008-56.pdf" title="hp.com" rel="nofollow">http://www.hpl.hp.com/techreports/2008/HPL-2008-56.pdf</a> [hp.com] <br> <br>

How fugly can this be for all you C++ wannabe fanguys??? (Phun intended!)</htmltext>
<tokenext>Threads Can not be Implemented as a Library .
That means pthreads is bad .
Read : http : //www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf [ hp.com ] Then after a few years , work on Java memory model has found a good solution .
Read : Foundations of the C + + concurrency memory model [ based on the Java memory model ] http : //www.hpl.hp.com/techreports/2008/HPL-2008-56.pdf [ hp.com ] How fugly can this be for all you C + + wannabe fanguys ? ? ?
( Phun intended !
)</tokentext>
<sentencetext>Threads Cannot be Implemented as a Library.
That means pthreads is bad.
Read: http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf [hp.com]  

Then after a few years, work on Java memory model has found a good solution.
Read: Foundations of the C++ concurrency memory model [based on the Java memory model]

http://www.hpl.hp.com/techreports/2008/HPL-2008-56.pdf [hp.com]  

How fugly can this be for all you C++ wannabe fanguys???
(Phun intended!
)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243211</id>
	<title>Re:What's so hard?</title>
	<author>ponraul</author>
	<datestamp>1244405040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yeah. It's easy creating threads.</p><p>However, if the threads then need to share data among themselves, or one class of threads has to wait for another class of threads, or you have threads competing for finite resources, or you need to share state among different threads, it's not so easy.</p><p>The hard part comes from using threads correctly. In school, they teach you how to prove a particular piece of concurrent code cannot deadlock. In real life, you have to be risk-averse.</p><p>Better concurrency primitives make for less error-prone concurrent code. However, better concurrency primitives are generally more restrictive in the kinds of problems they can easily solve. Something that might be trivial to do with semaphores isn't so easy with monitors.</p></htmltext>
<tokenext>Yeah .
It 's easy creating threads.However , if the threads then need to share data with among themselves , or one class of threads has to wait for another class of threads , or you have threads competing for finite resources , or you need to share state among different threads it 's not so easy.The hard part comes from using threads correctly .
In school , they teach you how to prove a particular piece of concurrent code can not deadlock .
In real life , you have to be risk adverse.Better concurrency primitives make for less error prone concurrent code .
However , better concurrency primitives are generally more restrictive in the kinds of problems they can easily solve .
Something that might be trivial to do with semaphores is n't so easy with monitors .</tokentext>
<sentencetext>Yeah.
It's easy creating threads.However, if the threads then need to share data with among themselves, or one class of threads has to wait for another class of threads, or you have threads competing for finite resources, or you need to share state among different threads it's not so easy.The hard part comes from using threads correctly.
In school, they teach you how to prove a particular piece of concurrent code cannot deadlock.
In real life, you have to be risk adverse.Better concurrency primitives make for less error prone concurrent code.
However, better concurrency primitives are generally more restrictive in the kinds of problems they can easily solve.
Something that might be trivial to do with semaphores isn't so easy with monitors.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069</parent>
</comment>
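ponraul's contrast between primitives can be made concrete with a small sketch: a counting semaphore bounds how many threads use a finite resource at once, while a plain lock (the monitor-style primitive) guards the bookkeeping. The names and the permit count here are illustrative assumptions, in Python rather than any particular threads library:

```python
import threading

slots = threading.Semaphore(2)    # counting semaphore: at most 2 users at once
state_lock = threading.Lock()     # monitor-style lock for the bookkeeping
active = 0
peak = 0

def use_resource():
    global active, peak
    with slots:                   # blocks until one of the 2 permits is free
        with state_lock:
            active += 1
            peak = max(peak, active)
        # ... do the actual work with the finite resource here ...
        with state_lock:
            active -= 1

threads = [threading.Thread(target=use_resource) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# peak never exceeds the two permits, however the eight threads interleave.
```

The semaphore expresses "at most N at once" in one line, which a bare lock cannot; conversely, a lock's single-owner discipline is what a semaphore cannot give you, which is the restrictiveness trade-off the comment describes.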
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243199</id>
	<title>Established vs new programming languages for HPC</title>
	<author>Anonymous</author>
	<datestamp>1244404980000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext><p>This is a subject near and dear to my heart. I got to participate in one of the early X10 alpha tests (my research group was asked to try it out and give feedback to Vivek Sarkar's IBM team). Since then, I've worked with lots of other specialized HPC programming languages.</p><p>One extremely important aspect of supercomputing, a point that many people fail to grasp, is that application code tends to live a long, long, long time. Far longer than the machines themselves. Rewriting code is simply too expensive and economically inefficient. At Los Alamos National Lab, much of the source code they run is nuclear simulations written in Fortran 77 or Fortran 90. Someone might have updated it to use MPI, but otherwise it's the same program. So it's important to bear in mind that those older languages, while not nearly as well suited for parallelism (either for programmer ease-of-use/efficiency, or for allowing the compiler to do deep analysis/optimization/scheduling), are going to be around for a long time yet.</p></htmltext>
<tokenext>This is a subject near and dear to my heart .
I got to participate in one of the early X10 alpha tests ( my research group was asked to try it out and give feedback to Vivek Sarker 's IBM team ) .
Since then , I 've worked with lots of other specialized programming HPC programming languages.One extremely important aspect of supercomputing , a point that many people fail to grasp , is that application code tends to live a long , long , long time .
Far longer than the machines themselves .
Rewriting code is simply too expensive and economically inefficient .
At Los Alamos National Lab , much of the source code they run are nuclear simulations written Fortran 77 or Fortran 90 .
Someone might have updated it to use MPI , but otherwise it 's the same program .
So it 's important to bear in mind that those older languages , while not nearly as well suited for parallelism ( either for programmer ease-of-use/effeciency , or to allow the compiler to do deep analysis/optimization/scheduling ) , are going to be around for a long time yet .</tokentext>
<sentencetext>This is a subject near and dear to my heart.
I got to participate in one of the early X10 alpha tests (my research group was asked to try it out and give feedback to Vivek Sarker's IBM team).
Since then, I've worked with lots of other specialized programming HPC programming languages.One extremely important aspect of supercomputing, a point that many people fail to grasp, is that application code tends to live a long, long, long time.
Far longer than the machines themselves.
Rewriting code is simply too expensive and economically inefficient.
At Los Alamos National Lab, much of the source code they run are nuclear simulations written Fortran 77 or Fortran 90.
Someone might have updated it to use MPI, but otherwise it's the same program.
So it's important to bear in mind that those older languages, while not nearly as well suited for parallelism (either for programmer ease-of-use/effeciency, or to allow the compiler to do deep analysis/optimization/scheduling), are going to be around for a long time yet.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245895</id>
	<title>Re:Parallel is here to stay but not for every app</title>
	<author>Eskarel</author>
	<datestamp>1244383620000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>While technically most servers are somewhat parallel in nature, it isn't really the same sort of thing that these sorts of languages are designed to achieve.</p><p>Servers, for the most part, are parallel because they have to handle a lot of requests simultaneously, so they spin off a new thread or process (depending on the architecture) for each request to the server, do some relatively simple concurrency checking, and then run each request, for the most part, in serial. They're parallel because the task being performed is parallel: a web server that could only return the page to one person at a time wouldn't work. It's not to make them faster or more efficient, or to take advantage of multiple processors. This kind of parallel architecture is relatively simple, because the architecture is defined by the requirements of the project and you just have to ensure that it works.</p><p>Taking something like a video encoder, PC game, compiler, etc. and making it run in parallel so that it's faster and can take advantage of modern hardware is a totally different kettle of fish. You have to redesign your core idea so that it works in parallel; you have to turn one discrete task into two or more which can be run at the same time. It's a whole different challenge, and one which very few programmers (myself included) seem to be prepared to meet.</p></htmltext>
<tokenext>While technically most servers are somewhat parallel in nature , it is n't really the same sort of thing that these sorts of languages are designed to achieve . Servers , for the most part , are parallel because they have to be able to handle a lot of requests simultaneously , so they spin off a new thread or process ( depending on the architecture ) for each request to the server , do some relatively simple concurrency checking and then run each request , for the most part , in serial .
They 're parallel because the task being performed is parallel , a web server that could only return the page to one person at a time would n't work , not to make them faster or more efficient , or to take advantage of multiple processors .
This kind of parallel architecture is relatively simple , because the architecture is defined by the requirements of the project and you just have to ensure that it works . Taking something like a video encoder , PC game , compiler , etc and making it run in parallel so that it 's faster and can take advantage of modern hardware is a totally different kettle of fish .
You have to redesign your core idea so that it works in parallel , you have to turn one discrete task into two or more which can be run at the same time .
It 's a whole different challenge and one which very few programmers ( myself included ) seem to be prepared to meet .</tokentext>
<sentencetext>While technically most servers are somewhat parallel in nature, it isn't really the same sort of thing that these sorts of languages are designed to achieve. Servers, for the most part, are parallel because they have to be able to handle a lot of requests simultaneously, so they spin off a new thread or process (depending on the architecture) for each request to the server, do some relatively simple concurrency checking and then run each request, for the most part, in serial.
They're parallel because the task being performed is parallel, a web server that could only return the page to one person at a time wouldn't work, not to make them faster or more efficient, or to take advantage of multiple processors.
This kind of parallel architecture is relatively simple, because the architecture is defined by the requirements of the project and you just have to ensure that it works. Taking something like a video encoder, PC game, compiler, etc. and making it run in parallel so that it's faster and can take advantage of modern hardware is a totally different kettle of fish.
You have to redesign your core idea so that it works in parallel, you have to turn one discrete task into two or more which can be run at the same time.
It's a whole different challenge and one which very few programmers (myself included) seem to be prepared to meet.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243091</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243699</id>
	<title>Re:Awful example in the article</title>
	<author>redfood</author>
	<datestamp>1244365740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>This is just a simple example to show how hard it is to keep data consistent across different processes.  A better example would have been to think about a shared bank account where one person might be withdrawing money and the other might be checking the balance at the same time.</htmltext>
<tokenext>This is just a simple example to show how hard it is to keep data consistent across different processes .
A better example would have been to think about a shared bank account where one person might be withdrawing money and the other might be checking the balance at the same time .</tokentext>
<sentencetext>This is just a simple example to show how hard it is to keep data consistent across different processes.
A better example would have been to think about a shared bank account where one person might be withdrawing money and the other might be checking the balance at the same time.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243129</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243861</id>
	<title>Chapel</title>
	<author>jbolden</author>
	<datestamp>1244367660000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Looking at the 99 bottles Chapel code (from original article)<br><a href="http://99-bottles-of-beer.net/language-chapel-1215.html" title="99-bottles-of-beer.net">http://99-bottles-of-beer.net/language-chapel-1215.html</a> [99-bottles-of-beer.net]</p><p>This looks like the way you do stuff in Haskell.  Functions compute the data and the I/O routine is moved into a "monad" where you need to sequence.  This doesn't seem outside the realm of the possible.</p></htmltext>
<tokenext>Looking at the 99 bottles Chapel code ( from original article ) http : //99-bottles-of-beer.net/language-chapel-1215.html [ 99-bottles-of-beer.net ] This looks like the way you do stuff in Haskell .
Functions compute the data and the I/O routine is moved into a " monad " where you need to sequence .
This does n't seem outside the realm of the possible .</tokentext>
<sentencetext>Looking at the 99 bottles Chapel code (from original article) http://99-bottles-of-beer.net/language-chapel-1215.html [99-bottles-of-beer.net] This looks like the way you do stuff in Haskell.
Functions compute the data and the I/O routine is moved into a "monad" where you need to sequence.
This doesn't seem outside the realm of the possible.</sentencetext>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_52</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243923
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246031
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243657
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243335
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243223
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243603
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245897
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243111
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245775
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245167
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245931
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246697
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_68</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242943
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246123
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_70</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243129
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243709
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243127
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244979
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243223
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243603
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28252459
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_72</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243091
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243275
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_67</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28253119
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_58</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243127
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243457
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246945
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243533
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244479
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242943
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28252917
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243165
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245959
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243483
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244103
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243129
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243379
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_73</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243855
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_59</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244173
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244389
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_66</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243223
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243603
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28252655
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_49</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243827
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243151
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28247343
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_56</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243129
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243351
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243111
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246683
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28248703
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_61</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243173
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243469
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243289
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245487
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243127
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244255
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28248249
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243487
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28258055
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_48</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243175
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28251875
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244031
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_53</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243223
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243603
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28251871
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242943
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28249225
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_60</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242943
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28249829
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243223
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243603
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28248433
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243091
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28275815
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243181
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243355
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245553
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_50</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243129
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243699
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243091
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245895
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_74</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243173
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244623
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28247277
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243211
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243165
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28251239
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_51</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243803
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243467
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_65</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243223
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243525
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243369
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244365
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243425
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243223
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243603
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28248205
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_71</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28251295
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243349
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_57</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243181
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243419
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243359
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244261
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243181
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28247963
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_64</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243181
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246169
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_47</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243181
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243701
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243127
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243747
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_63</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243181
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244499
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243181
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243371
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_54</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242943
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245833
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28250739
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243181
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243665
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243137
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28249271
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243127
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28253499
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28253359
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_55</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243127
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243689
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_46</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245685
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_69</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243109
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_62</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242943
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28299413
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243223
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243603
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244749
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243507
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_07_184205_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243127
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243457
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28266013
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_07_184205.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243069
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243425
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243827
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243855
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28253359
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243369
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244365
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243109
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243533
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244479
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243165
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28251239
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245959
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28251295
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28247343
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243211
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243181
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243371
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243419
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28247963
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243355
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245553
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243665
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243701
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246169
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244499
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244031
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243335
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244389
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245685
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243151
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28249271
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243359
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244261
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243349
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_07_184205.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243173
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243469
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244623
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28247277
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_07_184205.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243189
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_07_184205.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243129
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243379
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243699
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243709
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243351
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_07_184205.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242955
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_07_184205.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28247157
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_07_184205.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243199
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_07_184205.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243153
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_07_184205.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242987
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28253119
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244173
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246697
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243223
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243525
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243603
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28248433
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28252655
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244749
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28252459
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245897
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28251871
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28248205
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245167
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243091
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28275815
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243275
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245895
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243111
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245775
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246683
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28248703
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243483
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244103
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_07_184205.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243127
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244979
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243747
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28253499
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243457
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246945
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28266013
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28244255
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28248249
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243689
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_07_184205.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243023
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243289
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245487
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243657
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245931
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243507
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243137
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243487
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28258055
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243467
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243175
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28251875
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243803
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_07_184205.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28243923
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246031
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_07_184205.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28242943
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28249225
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28246123
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28245833
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28250739
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28249829
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28299413
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_07_184205.28252917
</commentlist>
</conversation>
