<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_03_21_2345243</id>
	<title>Multicore Requires OS Rework, Windows Expert Says</title>
	<author>timothy</author>
	<datestamp>1269172080000</datestamp>
	<htmltext>alphadogg writes <i>"With chip makers continuing to increase the number of cores they include on each new generation of their processors, perhaps it's <a href="http://www.networkworld.com/news/2010/031910-multicore-requires-os-rework-windows.html">time to rethink the basic architecture of today's operating systems</a>, suggested Dave Probert, a kernel architect within the Windows core operating systems division at Microsoft. The current approach to harnessing the power of multicore processors is complicated and not entirely successful, he argued. The key may not be in throwing more energy into refining techniques such as parallel programming, but rather rethinking the basic abstractions that make up the operating systems model. Today's computers don't get enough performance out of their multicore chips, Probert said. 'Why should you ever, with all this parallel hardware, ever be waiting for your computer?' he asked. Probert made his presentation at the University of Illinois at Urbana-Champaign's Universal Parallel Computing Research Center."</i></htmltext>
<tokentext>alphadogg writes " With chip makers continuing to increase the number of cores they include on each new generation of their processors , perhaps it 's time to rethink the basic architecture of today 's operating systems , suggested Dave Probert , a kernel architect within the Windows core operating systems division at Microsoft .
The current approach to harnessing the power of multicore processors is complicated and not entirely successful , he argued .
The key may not be in throwing more energy into refining techniques such as parallel programming , but rather rethinking the basic abstractions that make up the operating systems model .
Today 's computers do n't get enough performance out of their multicore chips , Probert said .
'Why should you ever , with all this parallel hardware , ever be waiting for your computer ?
' he asked .
Probert made his presentation at the University of Illinois at Urbana-Champaign 's Universal Parallel Computing Research Center .
"</tokentext>
<sentencetext>alphadogg writes "With chip makers continuing to increase the number of cores they include on each new generation of their processors, perhaps it's time to rethink the basic architecture of today's operating systems, suggested Dave Probert, a kernel architect within the Windows core operating systems division at Microsoft.
The current approach to harnessing the power of multicore processors is complicated and not entirely successful, he argued.
The key may not be in throwing more energy into refining techniques such as parallel programming, but rather rethinking the basic abstractions that make up the operating systems model.
Today's computers don't get enough performance out of their multicore chips, Probert said.
'Why should you ever, with all this parallel hardware, ever be waiting for your computer?
' he asked.
Probert made his presentation at the University of Illinois at Urbana-Champaign's Universal Parallel Computing Research Center.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562206</id>
	<title>Re:reinventing the wheel</title>
	<author>Anonymous</author>
	<datestamp>1269180600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>They already kissed and made up over Dave Cutler making NT a little too similar to VMS. See conclusion at and of article.

<a href="http://everything2.com/title/The+similarities+between+VMS+and+Windows+NT" title="everything2.com" rel="nofollow">http://everything2.com/title/The+similarities+between+VMS+and+Windows+NT</a> [everything2.com]</htmltext>
<tokentext>They already kissed and made up over Dave Cutler making NT a little too similar to VMS .
See conclusion at end of article .
http : //everything2.com/title/The + similarities + between + VMS + and + Windows + NT [ everything2.com ]</tokentext>
<sentencetext>They already kissed and made up over Dave Cutler making NT a little too similar to VMS.
See conclusion at end of article.
http://everything2.com/title/The+similarities+between+VMS+and+Windows+NT [everything2.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561670</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562378</id>
	<title>Microsoft's slowness and Windows 2005</title>
	<author>Anonymous</author>
	<datestamp>1269181860000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>I love how Microsoft can come along in 2010 and with a straight face say it's about time they took multiprocessing seriously. Or say it's about time we started putting HTML5 features into our browser. And we're finally going to support the ISO audio video standard from 2002. And by the way, it's about time we let you know that our answer to the 2007 iPhone will be shipping in 2011. And look how great it is that we just got 10% of our platform modernized off the 2001 XP version! And our office suite is just about ready to discover that the World Wide Web exists. It's like they are in a time warp.</p><p>I know they have product managers instead of product designers, and so have to crib design from the rest of the industry, necessitating them to be years behind, but on engineering stuff like multiprocessing, you expect them to at least have read the memo from Intel in 2005 about single cores not scaling and how the future was going to be 128 core chips before you know it.</p><p>I guess when you recognize that Windows Vista was really Windows 2003 and Windows 7 is really Windows 2005 then it makes some sense. It really is time for them to start taking multiprocessing seriously.</p><p>I am so glad I stopped using their products in 1999.</p></htmltext>
<tokentext>I love how Microsoft can come along in 2010 and with a straight face say it 's about time they took multiprocessing seriously .
Or say it 's about time we started putting HTML5 features into our browser .
And we 're finally going to support the ISO audio video standard from 2002 .
And by the way , it 's about time we let you know that our answer to the 2007 iPhone will be shipping in 2011 .
And look how great it is that we just got 10 % of our platform modernized off the 2001 XP version !
And our office suite is just about ready to discover that the World Wide Web exists .
It 's like they are in a time warp .
I know they have product managers instead of product designers , and so have to crib design from the rest of the industry , necessitating them to be years behind , but on engineering stuff like multiprocessing , you expect them to at least have read the memo from Intel in 2005 about single cores not scaling and how the future was going to be 128 core chips before you know it .
I guess when you recognize that Windows Vista was really Windows 2003 and Windows 7 is really Windows 2005 then it makes some sense .
It really is time for them to start taking multiprocessing seriously .
I am so glad I stopped using their products in 1999 .</tokentext>
<sentencetext>I love how Microsoft can come along in 2010 and with a straight face say it's about time they took multiprocessing seriously.
Or say it's about time we started putting HTML5 features into our browser.
And we're finally going to support the ISO audio video standard from 2002.
And by the way, it's about time we let you know that our answer to the 2007 iPhone will be shipping in 2011.
And look how great it is that we just got 10% of our platform modernized off the 2001 XP version!
And our office suite is just about ready to discover that the World Wide Web exists.
It's like they are in a time warp.
I know they have product managers instead of product designers, and so have to crib design from the rest of the industry, necessitating them to be years behind, but on engineering stuff like multiprocessing, you expect them to at least have read the memo from Intel in 2005 about single cores not scaling and how the future was going to be 128 core chips before you know it.
I guess when you recognize that Windows Vista was really Windows 2003 and Windows 7 is really Windows 2005 then it makes some sense.
It really is time for them to start taking multiprocessing seriously.
I am so glad I stopped using their products in 1999.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563174</id>
	<title>Re:Luckily OSX Already Has MultiCore Tech</title>
	<author>Daltorak</author>
	<datestamp>1269188460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The Microsoft Windows equivalent of Grand Central Dispatch is called <b>User-Mode Scheduling</b>, and is included with Windows 7 and Windows Server 2008 R2.</p><p><a href="http://msdn.microsoft.com/en-us/library/dd627187(VS.85).aspx" title="microsoft.com">http://msdn.microsoft.com/en-us/library/dd627187(VS.85).aspx</a> [microsoft.com]</p><p>Microsoft has also released application libraries on top of UMS to make it easier to use in certain languages.  C++, for example, has the Concurrency Runtime.  More on that here:</p><p><a href="http://msdn.microsoft.com/en-us/library/dd504870(VS.100).aspx" title="microsoft.com">http://msdn.microsoft.com/en-us/library/dd504870(VS.100).aspx</a> [microsoft.com]</p><p>GCD and UMS both let an application developer accomplish pretty much the same thing: move all work into a single process with enough pre-assigned threads to cover all the cores on a system, and then work is queued up and performed on those threads.  The benefit here is that GCD and UMS applications don't have to context-switch into and out of the kernel a bazillion times in order to do a set of parallelizable tasks.</p><p>GCD and UMS+CCR both whittle down the developer's code-writing commitment to a few lines.  It's pretty amazing stuff.</p><p><b>BUT....</b></p><p>Neither of these technologies really addresses the underlying <b>system-wide</b> problem: adding more CPU cores to a system doesn't increase performance on a linear scale like increasing the speed of the CPU.  Every time you add a core, more and more time gets spent doing resource management instead of actual work.  OS kernels invariably have locks on important resources (memory tables, for example), and while these things don't matter at all on a 2 or 4 core system, they're going to be a huge bottleneck on a 200-core system.  No general-purpose operating system on the market today... not Windows, not OS X, not even Linux... can provide a linear or near-linear performance improvement as the number of cores increases beyond 16 or so.  Not as long as there is any kind of shared resource between those cores.</p><p>By the way.... Dave Probert, who is the Microsoft engineer the Slashdot article is discussing, explained UMS in this Channel 9 video over a year ago:</p><p><a href="http://channel9.msdn.com/shows/Going+Deep/Dave-Probert-Inside-Windows-7-User-Mode-Scheduler-UMS/" title="msdn.com">http://channel9.msdn.com/shows/Going+Deep/Dave-Probert-Inside-Windows-7-User-Mode-Scheduler-UMS/</a> [msdn.com]</p></htmltext>
<tokentext>The Microsoft Windows equivalent of Grand Central Dispatch is called User-Mode Scheduling , and is included with Windows 7 and Windows Server 2008 R2 .
http : //msdn.microsoft.com/en-us/library/dd627187 ( VS.85 ) .aspx [ microsoft.com ]
Microsoft has also released application libraries on top of UMS to make it easier to use in certain languages .
C + + , for example , has the Concurrency Runtime .
More on that here : http : //msdn.microsoft.com/en-us/library/dd504870 ( VS.100 ) .aspx [ microsoft.com ]
GCD and UMS both let an application developer accomplish pretty much the same thing : move all work into a single process with enough pre-assigned threads to cover all the cores on a system , and then work is queued up and performed on those threads .
The benefit here is that GCD and UMS applications do n't have to context-switch into and out of the kernel a bazillion times in order to do a set of parallelizable tasks .
GCD and UMS + CCR both whittle down the developer 's code-writing commitment to a few lines .
It 's pretty amazing stuff .
BUT ...
Neither of these technologies really addresses the underlying system-wide problem : adding more CPU cores to a system does n't increase performance on a linear scale like increasing the speed of the CPU .
Every time you add a core , more and more time gets spent doing resource management instead of actual work .
OS kernels invariably have locks on important resources ( memory tables , for example ) , and while these things do n't matter at all on a 2 or 4 core system , they 're going to be a huge bottleneck on a 200-core system .
No general-purpose operating system on the market today ... not Windows , not OS X , not even Linux ... can provide a linear or near-linear performance improvement as the number of cores increases beyond 16 or so .
Not as long as there is any kind of shared resource between those cores .
By the way ... Dave Probert , who is the Microsoft engineer the Slashdot article is discussing , explained UMS in this Channel 9 video over a year ago :
http : //channel9.msdn.com/shows/Going + Deep/Dave-Probert-Inside-Windows-7-User-Mode-Scheduler-UMS/ [ msdn.com ]</tokentext>
<sentencetext>The Microsoft Windows equivalent of Grand Central Dispatch is called User-Mode Scheduling, and is included with Windows 7 and Windows Server 2008 R2.
http://msdn.microsoft.com/en-us/library/dd627187(VS.85).aspx [microsoft.com]
Microsoft has also released application libraries on top of UMS to make it easier to use in certain languages.
C++, for example, has the Concurrency Runtime.
More on that here: http://msdn.microsoft.com/en-us/library/dd504870(VS.100).aspx [microsoft.com]
GCD and UMS both let an application developer accomplish pretty much the same thing: move all work into a single process with enough pre-assigned threads to cover all the cores on a system, and then work is queued up and performed on those threads.
The benefit here is that GCD and UMS applications don't have to context-switch into and out of the kernel a bazillion times in order to do a set of parallelizable tasks.
GCD and UMS+CCR both whittle down the developer's code-writing commitment to a few lines.
It's pretty amazing stuff.
BUT....
Neither of these technologies really addresses the underlying system-wide problem: adding more CPU cores to a system doesn't increase performance on a linear scale like increasing the speed of the CPU.
Every time you add a core, more and more time gets spent doing resource management instead of actual work.
OS kernels invariably have locks on important resources (memory tables, for example), and while these things don't matter at all on a 2 or 4 core system, they're going to be a huge bottleneck on a 200-core system.
No general-purpose operating system on the market today... not Windows, not OS X, not even Linux... can provide a linear or near-linear performance improvement as the number of cores increases beyond 16 or so.
Not as long as there is any kind of shared resource between those cores.
By the way.... Dave Probert, who is the Microsoft engineer the Slashdot article is discussing, explained UMS in this Channel 9 video over a year ago:
http://channel9.msdn.com/shows/Going+Deep/Dave-Probert-Inside-Windows-7-User-Mode-Scheduler-UMS/ [msdn.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561624</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562194</id>
	<title>Re:Luckily OSX Already Has MultiCore Tech</title>
	<author>Anonymous</author>
	<datestamp>1269180540000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>It seems you are <i>severely</i> underestimating what GCD means to the application developer. I strongly suggest you read parts 12 and 13 of <a href="http://arstechnica.com/apple/reviews/2009/08/mac-os-x-10-6.ars/12" title="arstechnica.com"> John Siracusa's excellent review</a> [arstechnica.com] very carefully. As Siracusa says,</p><p><div class="quote"><p>Those with some multithreaded programming experience may be unimpressed with the GCD. So Apple made a thread pool. Big deal. They've been around forever. But the angels are in the details. Yes, the implementation of queues and threads has an elegant simplicity, and baking it into the lowest levels of the OS really helps to lower the perceived barrier to entry, but it's the API built around blocks that makes Grand Central Dispatch so attractive to developers. Just as Time Machine was "the first backup system people will actually use," Grand Central Dispatch is poised to finally spread the heretofore dark art of asynchronous application design to all Mac OS X developers. I can't wait.</p></div></p>
	</htmltext>
<tokentext>It seems you are severely underestimating what GCD means to the application developer .
I strongly suggest you read parts 12 and 13 of John Siracusa 's excellent review [ arstechnica.com ] very carefully .
As Siracusa says ,
Those with some multithreaded programming experience may be unimpressed with the GCD .
So Apple made a thread pool .
Big deal .
They 've been around forever .
But the angels are in the details .
Yes , the implementation of queues and threads has an elegant simplicity , and baking it into the lowest levels of the OS really helps to lower the perceived barrier to entry , but it 's the API built around blocks that makes Grand Central Dispatch so attractive to developers .
Just as Time Machine was " the first backup system people will actually use , " Grand Central Dispatch is poised to finally spread the heretofore dark art of asynchronous application design to all Mac OS X developers .
I ca n't wait .</tokentext>
<sentencetext>It seems you are severely underestimating what GCD means to the application developer.
I strongly suggest you read parts 12 and 13 of  John Siracusa's excellent review [arstechnica.com] very carefully.
As Siracusa says,
Those with some multithreaded programming experience may be unimpressed with the GCD.
So Apple made a thread pool.
Big deal.
They've been around forever.
But the angels are in the details.
Yes, the implementation of queues and threads has an elegant simplicity, and baking it into the lowest levels of the OS really helps to lower the perceived barrier to entry, but it's the API built around blocks that makes Grand Central Dispatch so attractive to developers.
Just as Time Machine was "the first backup system people will actually use," Grand Central Dispatch is poised to finally spread the heretofore dark art of asynchronous application design to all Mac OS X developers.
I can't wait.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561822</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562746</id>
	<title>Re:The problem isnt even that simple</title>
	<author>pslam</author>
	<datestamp>1269184800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>That's simply not true, and hasn't even been true since the first computers I've used (like, 1980s). Only the most basic, cheap devices use polled I/O for all hardware access. Even an ancient floppy disk peripheral has a small FIFO it can simultaneously fill while the CPU is busy doing other things. I can't understand how you can pass comment given this apparent lack of basic architecture knowledge.</htmltext>
<tokentext>That 's simply not true , and has n't even been true since the first computers I 've used ( like , 1980s ) .
Only the most basic , cheap devices use polled I/O for all hardware access .
Even an ancient floppy disk peripheral has a small FIFO it can simultaneously fill while the CPU is busy doing other things .
I ca n't understand how you can pass comment given this apparent lack of basic architecture knowledge .</tokentext>
<sentencetext>That's simply not true, and hasn't even been true since the first computers I've used (like, 1980s).
Only the most basic, cheap devices use polled I/O for all hardware access.
Even an ancient floppy disk peripheral has a small FIFO it can simultaneously fill while the CPU is busy doing other things.
I can't understand how you can pass comment given this apparent lack of basic architecture knowledge.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561542</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562124</id>
	<title>Re:I hate to say it, but...</title>
	<author>Bengie</author>
	<datestamp>1269180180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Win7 almost never waits. Not sure why you're still using XP. Heck, with Win7, I can transfer 110MB/sec with SMB and play games just fine. No more of that "what's running in the background slowing me down". Kick off a defrag and go play, let a virus scanner run in the background. I can't wait to get an SSD, I just got some white label HD in my Dell comp.</p></htmltext>
<tokentext>Win7 almost never waits .
Not sure why you 're still using XP .
Heck , with Win7 , I can transfer 110MB/sec with SMB and play games just fine .
No more of that " what 's running in the background slowing me down " .
Kick off a defrag and go play , let a virus scanner run in the background .
I ca n't wait to get an SSD , I just got some white label HD in my Dell comp .</tokentext>
<sentencetext>Win7 almost never waits.
Not sure why you're still using XP.
Heck, with Win7, I can transfer 110MB/sec with SMB and play games just fine.
No more of that "what's running in the background slowing me down".
Kick off a defrag and go play, let a virus scanner run in the background.
I can't wait to get an SSD, I just got some white label HD in my Dell comp.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561530</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561852</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>The MAZZTer</author>
	<datestamp>1269178320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Often times Explorer will hang while waiting for I/O over the network to complete.  Usually when I accidentally drag some files briefly over a folder symlinked to a network folder.  Other times when I'm just scrolling down a list of folders on a remote machine I get lots of hitching.  The drives are slow but this is really no excuse for the poor performance on THIS machine.  This is Windows 7 btw.</htmltext>
<tokentext>Often times Explorer will hang while waiting for I/O over the network to complete .
Usually when I accidentally drag some files briefly over a folder symlinked to a network folder .
Other times when I 'm just scrolling down a list of folders on a remote machine I get lots of hitching .
The drives are slow but this is really no excuse for the poor performance on THIS machine .
This is Windows 7 btw .</tokentext>
<sentencetext>Often times Explorer will hang while waiting for I/O over the network to complete.
Usually when I accidentally drag some files briefly over a folder symlinked to a network folder.
Other times when I'm just scrolling down a list of folders on a remote machine I get lots of hitching.
The drives are slow but this is really no excuse for the poor performance on THIS machine.
This is Windows 7 btw.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31567752</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>Caetel</author>
	<datestamp>1269271140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Right... an issue which was fixed 3 years ago with Windows Vista</htmltext>
<tokentext>Right... an issue which was fixed 3 years ago with Windows Vista</tokentext>
<sentencetext>Right... an issue which was fixed 3 years ago with Windows Vista</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563684</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>LordLimecat</author>
	<datestamp>1269193140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If you're moving thousands of files over the network and you're not using xcopy, you're doing it wrong.</htmltext>
<tokentext>If you 're moving thousands of files over the network and you 're not using xcopy , you 're doing it wrong .</tokentext>
<sentencetext>If you're moving thousands of files over the network and you're not using xcopy, you're doing it wrong.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564232</id>
	<title>Re:BeOS was doing it...</title>
	<author>Anonymous</author>
	<datestamp>1269200640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Only 1 reference in the comments to BeOS ? I thought Slashdot was "News for nerds"...</p></htmltext>
<tokentext>Only 1 reference in the comments to BeOS ?
I thought Slashdot was " News for nerds " .. .</tokentext>
<sentencetext>Only 1 reference in the comments to BeOS ?
I thought Slashdot was "News for nerds"...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561884</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31577592</id>
	<title>People are already rethinking the OS...</title>
	<author>alexandre_ganso</author>
	<datestamp>1269263040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>... for the multi-core era.</p><p>Too bad it is not microsoft: <a href="http://www.top500.org/stats/list/34/osfam" title="top500.org">http://www.top500.org/stats/list/34/osfam</a> [top500.org]</p></htmltext>
<tokentext>... for the multi-core era .
Too bad it is not microsoft : http : //www.top500.org/stats/list/34/osfam [ top500.org ]</tokentext>
<sentencetext>... for the multi-core era.
Too bad it is not microsoft: http://www.top500.org/stats/list/34/osfam [top500.org]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562030</id>
	<title>Re:Luckily OSX Already Has MultiCore Tech</title>
	<author>Anonymous</author>
	<datestamp>1269179520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Grand Central is a nice way to tide things over for a while, but not a satisfactory answer to the problem. I've been having the honor of interacting with some of the finest minds now working on the problem of multicore and massive parallelism, and everyone is still struggling with it. And yes, there are plenty of Macs around, and they'd be building off of Grand Dispatch if it really was a great answer to the question.</p><p>When I was last interested in GC, Apple hadn't released technical docs; I've been skimming them over just now and it seems unwieldy and just plain ugly -- it abstracts things away from the hardware, while creating a multi-level design that's much more confusing than need be. (Not to mention, they'll never make it cross-platform.)</p><p>One design I've recently been introduced to pools the actual hardware threads and uses a caller/callee hierarchical relationship for establishing the distribution of work processes. Although I have questions about that design as well, it is much cleaner than GC and far more intuitive. I think it has some small chance of leading to about as good of a solution as we'll get in the near future, whereas GC seems like a very slapped-together dead end. I'd skip the braggadocio about GC if I was you -- at least around anybody who's actively working on solutions to the problem.</p></htmltext>
<tokentext>Grand Central is a nice way to tide things over for a while , but not a satisfactory answer to the problem .
I 've been having the honor of interacting with some of the finest minds now working on the problem of multicore and massive parallelism , and everyone is still struggling with it .
And yes , there are plenty of Macs around , and they 'd be building off of Grand Dispatch if it really was a great answer to the question .
When I was last interested in GC , Apple had n't released technical docs ; I 've been skimming them over just now and it seems unwieldy and just plain ugly -- it abstracts things away from the hardware , while creating a multi-level design that 's much more confusing than need be .
( Not to mention , they 'll never make it cross-platform . )
One design I 've recently been introduced to pools the actual hardware threads and uses a caller/callee hierarchical relationship for establishing the distribution of work processes .
Although I have questions about that design as well , it is much cleaner than GC and far more intuitive .
I think it has some small chance of leading to about as good of a solution as we 'll get in the near future , whereas GC seems like a very slapped-together dead end .
I 'd skip the braggadocio about GC if I was you -- at least around anybody who 's actively working on solutions to the problem .</tokentext>
<sentencetext>Grand Central is a nice way to tide things over for a while, but not a satisfactory answer to the problem.
I've been having the honor of interacting with some of the finest minds now working on the problem of multicore and massive parallelism, and everyone is still struggling with it.
And yes, there are plenty of Macs around, and they'd be building off of Grand Dispatch if it really was a great answer to the question.
When I was last interested in GC, Apple hadn't released technical docs; I've been skimming them over just now and it seems unwieldy and just plain ugly -- it abstracts things away from the hardware, while creating a multi-level design that's much more confusing than need be.
(Not to mention, they'll never make it cross-platform.)
One design I've recently been introduced to pools the actual hardware threads and uses a caller/callee hierarchical relationship for establishing the distribution of work processes.
Although I have questions about that design as well, it is much cleaner than GC and far more intuitive.
I think it has some small chance of leading to about as good of a solution as we'll get in the near future, whereas GC seems like a very slapped-together dead end.
I'd skip the braggadocio about GC if I was you -- at least around anybody who's actively working on solutions to the problem.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561624</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31567666</id>
	<title>Re:The way computers operate is to blame</title>
	<author>Anonymous</author>
	<datestamp>1269270960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>What about our blood circulatory and digestive systems?<br>They are buses that move data (food and blood to be processed) to their appropriate places.<br>A normal cell cannot perform the functions of a renal cell, just like a sperm cell cannot combat diseases like<br>a white blood cell, so the analogy of every object being a CPU is wrong; they are ASICs or DSPs at best.</p></htmltext>
<tokenext>what about our blood circulatory and digestive systems.they are busses that move data ( to be processed food and blood ) to their appropriate placesa normal cell can not perform the functions of a renal cell just like a sperm cell can not combat diseases likea white bloodcell , so the analogy of every object being a cpu is wrong , they are ASIC 's or DSP 's at best .</tokentext>
<sentencetext>What about our blood circulatory and digestive systems? They are buses that move data (food and blood to be processed) to their appropriate places. A normal cell cannot perform the functions of a renal cell, just like a sperm cell cannot combat diseases like a white blood cell, so the analogy of every object being a CPU is wrong; they are ASICs or DSPs at best.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562014</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564602</id>
	<title>Looks like Tanenbaum will have been right after all</title>
	<author>maweki</author>
	<datestamp>1269250500000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>Looks like Tanenbaum will have been right after all,

I mean, a vast number of cores and huge parallelism is the advent of the micro- and exokernel, isn't it? This would be the simplest way to harness the multiple cores (instead of modifying a monolithic kernel to use them).</htmltext>
<tokenext>Looks like Tanenbaum will have been right after all , I mean , a vast amount of cores and huge parallelism is the advent of the micro- and exokernel , is n't it ?
This would be the simplest way to harness the multiple cores ( instead of modifying a monolithic kernel to use multiple cores )</tokentext>
<sentencetext>Looks like Tanenbaum will have been right after all,

I mean, a vast number of cores and huge parallelism is the advent of the micro- and exokernel, isn't it?
This would be the simplest way to harness the multiple cores (instead of modifying a monolithic kernel to use them)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564774</id>
	<title>Re:reinventing the wheel</title>
	<author>Anonymous</author>
	<datestamp>1269253500000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext><p>Ok, I bet I get modded troll for this, but I so wish Windows WAS a bloated version of VMS.</p><p>It would have a distributed lock manager, decent file type support and metadata, baked-in security from the ground up, a scripting language that worked, logical names, a built-in flat database engine (RMS), a layered RDBMS (Rdb), a distributed file system, and a clustering system that works with 100+ nodes, can be spread across different physical sites, can contain mixed processor architectures, can do rolling upgrades, and has recorded uptimes of 12+ years...</p><p>Basically, Windows provides a bunch of services (win32 et al) that work surprisingly well for creating desktop applications, but can't really do most of the things that VMS can.</p><p>On the other hand, there's a Windows PC on my desk, and a VAX, Alpha and Itanium in the server room, and that's the way it should stay! (Get off my lawn.)</p></htmltext>
<tokenext>Ok , I bet I get modded troll for this , but I so wish Windows WAS a bloated version of VMS.It would have a distributed lock manager , decent file type support and metadata , baked in security from the ground up , a scripting language that worked , logical names , a built-in flat database engine ( RDB ) , a layered RDBMS ( RDB ) , a distributed file system , clustering system that works with 100 + nodes , could be spread across different physical sites , contain mixed processor architectures , could do rolling upgrades , and could have recorded uptimes of 12 years + ...Basically , Windows provides a bunch of services ( win32 et al ) , that work suprisingly well for creating desktop applications , but ca n't really do most of the things that VMS can.On the other hand , there 's a Windows PC on my desk , and a VAX , Alpha and Itanium in the server room , and that 's the way it should stay !
( Get off my lawn ) .</tokentext>
<sentencetext>Ok, I bet I get modded troll for this, but I so wish Windows WAS a bloated version of VMS. It would have a distributed lock manager, decent file type support and metadata, baked-in security from the ground up, a scripting language that worked, logical names, a built-in flat database engine (RMS), a layered RDBMS (Rdb), a distributed file system, a clustering system that works with 100+ nodes, could be spread across different physical sites, contain mixed processor architectures, could do rolling upgrades, and could have recorded uptimes of 12+ years... Basically, Windows provides a bunch of services (win32 et al) that work surprisingly well for creating desktop applications, but can't really do most of the things that VMS can. On the other hand, there's a Windows PC on my desk, and a VAX, Alpha and Itanium in the server room, and that's the way it should stay!
(Get off my lawn.)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561670</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562936</id>
	<title>Microkernels</title>
	<author>Anonymous</author>
	<datestamp>1269186480000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Yay. Let's get it started!</p></htmltext>
<tokenext>Yay .
Lets get it started !</tokentext>
<sentencetext>Yay.
Let's get it started!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562140</id>
	<title>Re:I hate to say it, but...</title>
	<author>MrHanky</author>
	<datestamp>1269180300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Wrong. You don't hate to say it; as your posting history shows, you're an Apple fanboy. In actual fact, you posted that comment just to advertise how much better Mac OS X is than Windows was in 2001. I know this because that's all your comment does.</p><p>It's a fucking ad.</p></htmltext>
<tokenext>Wrong .
You do n't hate to say it ; as you posting history shows , you 're an Apple fanboy .
In actual fact , you posted that comment just to advertise how much better Mac OS X is than Windows was in 2001 .
I know this because that 's all your comment does.It 's a fucking ad .</tokentext>
<sentencetext>Wrong.
You don't hate to say it; as your posting history shows, you're an Apple fanboy.
In actual fact, you posted that comment just to advertise how much better Mac OS X is than Windows was in 2001.
I know this because that's all your comment does. It's a fucking ad.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561530</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564842</id>
	<title>laughable</title>
	<author>Tom</author>
	<datestamp>1269254880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>perhaps it's time to rethink the basic architecture of today's operating systems, suggested Dave Probert, a kernel architect within the Windows core operating systems division at Microsoft.</p></div><p>Well, perhaps it's time you stopped equating "Windows" with "today's operating systems".</p><p>Every other major OS on the planet has been moving towards multiple cores for several years, and is ready for the multi-core systems currently on the market and coming in the years ahead. All, except Windows.</p></div>
	</htmltext>
<tokenext>perhaps it 's time to rethink the basic architecture of today 's operating systems , suggested Dave Probert , a kernel architect within the Windows core operating systems division at Microsoft.Well , perhaps it 's time you stop to equate " Windows " with " today 's operating systems " .Every other major OS on the planet has been moving towards multiple cores for several years , and is ready for the multi-core systems currently on and coming to the market in the coming years .
All , except Windows .</tokentext>
<sentencetext>perhaps it's time to rethink the basic architecture of today's operating systems, suggested Dave Probert, a kernel architect within the Windows core operating systems division at Microsoft. Well, perhaps it's time you stopped equating "Windows" with "today's operating systems". Every other major OS on the planet has been moving towards multiple cores for several years, and is ready for the multi-core systems currently on the market and coming in the years ahead.
All, except Windows.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561770</id>
	<title>Re:Why?</title>
	<author>masterzora</author>
	<datestamp>1269177720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>I dunno - maybe because optimal multiprocessor scheduling is an NP-complete problem?</p></div><p>That only means we can't get an absolutely optimal solution in polynomial time.  Fortunately, we are able to get a solution arbitrarily close to optimal in polynomial time.  Find the correct balance of time vs. optimality and BAM that NP-completeness isn't really a huge concern.</p><p><div class="quote"><p>Or because concurrent computations require coordination at certain points, which is an issue that doesn't exist with single-threaded systems, and it's therefore wishful thinking to assume you'll get linear scaling as you add more cores?</p></div><p>Now you're just putting words into his mouth.  Nobody's expecting linear scaling, here!  That is an entirely different question.</p></div>
	</htmltext>
<tokenext>I dunno - maybe because optimal multiprocessor scheduling is an NP-complete problem ? That only means we ca n't get an absolutely optimal solution in polynomial time .
Fortunately , we are able to get a solution arbitrarily close to optimal in polynomial time .
Find the correct balance of time vs. optimality and BAM that NP-completeness is n't really a huge concern.Or because concurrent computations require coordination at certain points , which is an issue that does n't exist with single-threaded systems , and it 's therefore wishful thinking to assume you 'll get linear scaling as you add more cores ? Now you 're just putting words into his mouth .
Nobody 's expecting linear scaling , here !
That is an entirely different question .</tokentext>
<sentencetext>I dunno - maybe because optimal multiprocessor scheduling is an NP-complete problem? That only means we can't get an absolutely optimal solution in polynomial time.
Fortunately, we are able to get a solution arbitrarily close to optimal in polynomial time.
Find the correct balance of time vs. optimality and BAM, that NP-completeness isn't really a huge concern. Or because concurrent computations require coordination at certain points, which is an issue that doesn't exist with single-threaded systems, and it's therefore wishful thinking to assume you'll get linear scaling as you add more cores? Now you're just putting words into his mouth.
Nobody's expecting linear scaling, here!
That is an entirely different question.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561556</parent>
</comment>
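The approximability point in the comment above can be made concrete with a short sketch. This is the classic greedy LPT (longest-processing-time-first) list-scheduling heuristic in Python -- an illustrative example, not anything from the article or the thread -- which Graham (1969) showed achieves a makespan within a factor of (4/3 - 1/(3m)) of the NP-hard optimum on m cores:

```python
import heapq

def lpt_schedule(jobs, m):
    """Greedy LPT: sort jobs by descending runtime, then always hand the
    next job to the least-loaded core. Graham's bound guarantees the
    resulting makespan is at most (4/3 - 1/(3m)) times optimal."""
    loads = [(0, core) for core in range(m)]    # (current load, core id)
    heapq.heapify(loads)
    assignment = {core: [] for core in range(m)}
    for job in sorted(jobs, reverse=True):
        load, core = heapq.heappop(loads)       # least-loaded core so far
        assignment[core].append(job)
        heapq.heappush(loads, (load + job, core))
    makespan = max(load for load, _ in loads)
    return makespan, assignment

makespan, cores = lpt_schedule([7, 5, 4, 4, 3, 3, 2], m=2)
# Both cores end up with load 14, which for this input equals the optimum (28/2).
```

That is the commenter's point in miniature: a polynomial-time heuristic lands close enough to the optimum that the NP-completeness of exact scheduling is rarely the practical bottleneck.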
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31569302</id>
	<title>We're waiting on file I/O!</title>
	<author>akakaak</author>
	<datestamp>1269275160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>'Why should you ever, with all this parallel hardware, ever be waiting for your computer?'</p><p>In my experience, the vast majority of waiting that I do is because of hard disk I/O, not computation. I suspect that focusing effort on the intelligent and widespread use of SSDs and other faster-than-hard-drive media will do more to minimize my wait time.</p></htmltext>
<tokenext>'Why should you ever , with all this parallel hardware , ever be waiting for your computer ?
'In my experience , the vast majority of waiting that I do is because of hard disk I/O , not computation .
I suspect that focusing effort on the intelligent and wide-spread use of SSDs and other faster-than-hard-drive media will do more to minimize my wait time .</tokentext>
<sentencetext>'Why should you ever, with all this parallel hardware, ever be waiting for your computer?'
In my experience, the vast majority of waiting that I do is because of hard disk I/O, not computation.
I suspect that focusing effort on the intelligent and widespread use of SSDs and other faster-than-hard-drive media will do more to minimize my wait time.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31567664</id>
	<title>Re:Fist post!</title>
	<author>ubercam</author>
	<datestamp>1269270960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>That's actually pretty good typing with your fists. Do you have a comically large keyboard?</p></div><p>He must have learned how to do it from Strongbad!</p></div>
	</htmltext>
<tokenext>That 's actually pretty good typing with your fists .
Do you have a comically large keyboard ? He must have learned how to do it from Strongbad !</tokentext>
<sentencetext>That's actually pretty good typing with your fists.
Do you have a comically large keyboard? He must have learned how to do it from Strongbad!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562332</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31578778</id>
	<title>Re:Luckily OSX is Already Has MultiCore Tech</title>
	<author>Anonymous</author>
	<datestamp>1269271860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>What Apple calls "blocks" are what other languages have called "closures" and had for decades. Adding closures to Objective-C isn't an interesting advance, and if Siracusa believes that's what makes GCD revolutionary I can only imagine he needs to spend less time writing articles and more time writing or debugging multi-threaded code.</p></div><p>Mmmmmm... so you read the conclusion, in particular the sentence "but it's the API built around blocks that makes Grand Central Dispatch so attractive to developers," and you thought you would fool anyone into believing that you actually read Siracusa's article?</p><p>Because your reply makes it evident that you <i>really</i> missed the point.</p></div>
	</htmltext>
<tokenext>What apple calls " blocks " are what other languages have called " closures " and had for decades .
Adding closures to Objective-C is n't an interesting advance , and if Siracusa believes that 's what makes GCD revolutionary I can only imagine he needs to spend less time writing articles and more time writing or debugging multi-threaded code.Mmmmmm... so you read the conclusion , in particular the sentence " but it 's the API built around blocks that makes Grand Central Dispatch so attractive to developers , " and you thought you would fool anyone into believing that you actually read Siracusa 's article ? Because your reply puts into evidence that you really missed the point .</tokentext>
<sentencetext>What apple calls "blocks" are what other languages have called "closures" and had for decades.
Adding closures to Objective-C isn't an interesting advance, and if Siracusa believes that's what makes GCD revolutionary I can only imagine he needs to spend less time writing articles and more time writing or debugging multi-threaded code. Mmmmmm... so you read the conclusion, in particular the sentence "but it's the API built around blocks that makes Grand Central Dispatch so attractive to developers," and you thought you would fool anyone into believing that you actually read Siracusa's article? Because your reply makes it evident that you really missed the point.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564264</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31568120</id>
	<title>How</title>
	<author>PhongUK</author>
	<datestamp>1269272100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>How do we test software written to be heavily parallel? For example, in the games industry we'd love to have ultra-complicated pathfinding for 1000s of NPCs, and we'd love to have bazillions of particles in a scene, but when we are presented with the task of developing a game for a quad-core machine or the PS3 (which are both about as parallel-capable as you can get at the moment), how do we write software that we can test on 16, 32 or 64 cores? TBH, we'll do what the games industry has always done and write for the target hardware.</htmltext>
<tokenext>How do we test software written to be heavily parallel ?
For example , the games industry , we 'd love to have ultra complicated path finding for 1000s of NPCs , we 'd love to have bazillions of particles in a scene , but when we are presented with the task of developing a game for a quad core machine , or the PS3 ( which are both about as parallel-capable as you can get at the moment ) , how do we write software that we can test on 16 , 32 or 64 cores .
TBH , we 'll do what the games industry has always done and write for the target hardware .</tokentext>
<sentencetext>How do we test software written to be heavily parallel?
For example, in the games industry we'd love to have ultra-complicated pathfinding for 1000s of NPCs, and we'd love to have bazillions of particles in a scene, but when we are presented with the task of developing a game for a quad-core machine or the PS3 (which are both about as parallel-capable as you can get at the moment), how do we write software that we can test on 16, 32 or 64 cores?
TBH, we'll do what the games industry has always done and write for the target hardware.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562164</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>Anonymous</author>
	<datestamp>1269180420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>One of the problems is how they implement concurrency. One approach is to have multiple threads of execution and some common area used to synchronize all these threads. Because it would lead to havoc if any thread could write to the common area, there's a locking system in place. Think of it like a key that you hang on a door. When you need to enter the room, you unlock the door then take the key with you. When you're done, you hang the key back on the door.  If you don't do it right you can have a queue form outside the door.</p><p>What happens in many cases is that a single thread is locking that common area. The system hangs while that common area is locked. In many cases this happens because of I/O. If your I/O subsystem cannot properly keep up with multiple requests you can have major bottlenecks. Now, you certainly don't want writes and reads to occur in the wrong order so you  make sure that any process that needs to perform I/O is essentially serialized. Sometimes the OS has no control over it. Maybe it's sending out requests to a piece of hardware and the hardware is just taking a long time.</p><p>But we certainly should rethink how we do certain things. Databases figured out certain ways to maintain consistency and maintain performance (in most cases). Maybe we need to approach filesystems as a big database (and there have been attempts before to do this). Imagine a filesystem containing millions of separate files. In a traditional filesystem this can cause performance nightmares, but any reasonably modern database can handle millions of rows with ease. An 'ls' in such a filesystem might use a select. A 'find . -name "foo.*.bar"' would just be a select.   Underneath this, multiple threads could work on multiple levels to return results.</p></htmltext>
<tokenext>One of the problems is how they implement concurrency .
One approach is to have multiple threads of execution and some common area used to synchronize all these threads .
Because it would lead to havoc if any thread could write to the common area , there 's a locking system in place .
Think of it like a key that you hang on a door .
When you need to enter the room , you unlock the door then take the key with you .
When you 're done , you hang the key back on the door .
If you do n't do it right you can have a queue form outside the door.What happens in many cases is that a single thread is locking that common area .
The system hangs while that common area is locked .
In many cases this happens because of I/O .
If your I/O subsystem can not properly keep up with multiple requests you can have major bottlenecks .
Now , you certainly do n't want writes and reads to occur in the wrong order so you make sure that any process that needs to perform I/O is essentially serialized .
Sometimes the OS has no control over it .
Maybe it 's sending out requests to a piece of hardware and the hardware is just taking a long time.But we certainly should rethink how we do certain things .
Databases figured out certain ways to maintain consistency and maintain performance ( in most cases ) .
Maybe we need to approach filesystems as a big database ( and there have been attempts before to do this ) .
Imagine a filesystem containing millions of separate files .
In a traditional filesystem this can cause performance nightmares , but any reasonably modern database can handle millions of rows with ease .
An 'ls ' in such a filesystem might use a select .
A 'find .
-name " foo .
* .bar " ' would just be a select .
Underneath this , multiple threads could work on multiple levels to return results .</tokentext>
<sentencetext>One of the problems is how they implement concurrency.
One approach is to have multiple threads of execution and some common area used to synchronize all these threads.
Because it would lead to havoc if any thread could write to the common area, there's a locking system in place.
Think of it like a key that you hang on a door.
When you need to enter the room, you unlock the door then take the key with you.
When you're done, you hang the key back on the door.
If you don't do it right you can have a queue form outside the door. What happens in many cases is that a single thread is locking that common area.
The system hangs while that common area is locked.
In many cases this happens because of I/O.
If your I/O subsystem cannot properly keep up with multiple requests you can have major bottlenecks.
Now, you certainly don't want writes and reads to occur in the wrong order so you make sure that any process that needs to perform I/O is essentially serialized.
Sometimes the OS has no control over it.
Maybe it's sending out requests to a piece of hardware and the hardware is just taking a long time. But we certainly should rethink how we do certain things.
Databases figured out certain ways to maintain consistency and maintain performance (in most cases).
Maybe we need to approach filesystems as a big database (and there have been attempts before to do this).
Imagine a filesystem containing millions of separate files.
In a traditional filesystem this can cause performance nightmares, but any reasonably modern database can handle millions of rows with ease.
An 'ls' in such a filesystem might use a select.
A 'find . -name "foo.*.bar"' would just be a select.
Underneath this, multiple threads could work on multiple levels to return results.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558</parent>
</comment>
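The door-and-key analogy in the comment above is precisely a mutex, and the queue that forms outside the door is lock contention. A minimal sketch in Python (all names here are invented for illustration):

```python
import threading
import queue

common_area_lock = threading.Lock()   # the key hanging on the door
results = queue.Queue()               # the common area the threads write to

def worker(thread_id, items):
    for item in items:
        processed = item * 2          # pure CPU work: no key needed, runs in parallel
        with common_area_lock:        # take the key; other threads queue at the door
            results.put((thread_id, processed))
        # leaving the 'with' block hangs the key back on the door

threads = [threading.Thread(target=worker, args=(i, range(3))) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# 4 threads x 3 items each -> 12 entries made it into the common area
```

(Strictly speaking `queue.Queue` is internally synchronized already; the explicit lock is only there to mirror the analogy. The comment's point is the serialization it forces: while one thread holds the key, everyone else waits, which is exactly the system-wide stall being described.)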
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561972</id>
	<title>Re:Luckily OSX is Already Has MultiCore Tech</title>
	<author>PhrostyMcByte</author>
	<datestamp>1269179100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Sort of.  It's a little higher-level and integrates better with the languages.</p><p>The real equivalents for Windows are being introduced with Visual Studio 2010 with the Concurrency Runtime for VC++ and the Parallel Framework for .NET.  From what I've seen of GCD, these go a few steps past it and provide a pretty extensive set of operations that easily differentiate it from simple thread pooling.</p></htmltext>
<tokenext>Sort of .
It 's a little higher-level and integrates better with the languages.The real equivalents for Windows are being introduced with Visual Studio 2010 with the Concurrency Runtime for VC + + and the Parallel Framework for .NET .
From what I 've seen of GCD , these go a few steps past it and provide a pretty extensive set of operations that easily differentiate it from simple thread pooling .</tokentext>
<sentencetext>Sort of.
It's a little higher-level and integrates better with the languages. The real equivalents for Windows are being introduced with Visual Studio 2010 with the Concurrency Runtime for VC++ and the Parallel Framework for .NET.
From what I've seen of GCD, these go a few steps past it and provide a pretty extensive set of operations that easily differentiate it from simple thread pooling.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561822</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31565920</id>
	<title>Anonymous Coward</title>
	<author>Anonymous</author>
	<datestamp>1269266160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>http://www.barrelfish.org/</p></htmltext>
<tokenext>http : //www.barrelfish.org/</tokentext>
<sentencetext>http://www.barrelfish.org/</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562332</id>
	<title>Re:Fist post!</title>
	<author>omfgnosis</author>
	<datestamp>1269181500000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext><p>That's actually pretty good typing with your fists. Do you have a comically large keyboard?</p></htmltext>
<tokenext>That 's actually pretty good typing with your fists .
Do you have a comically large keyboard ?</tokentext>
<sentencetext>That's actually pretty good typing with your fists.
Do you have a comically large keyboard?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561506</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562014</id>
	<title>The way computers operate is to blame</title>
	<author>master_p</author>
	<datestamp>1269179400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The real reason behind the problem is that the way a computer operates is totally inappropriate for parallelism. The concept of data moving through a bus to a processing core is totally at odds with parallelism.<br>We do see tremendous parallelism around us. Why? Because, in the real world, there is no bus to move the data over, and there is no central core! In the real world, each object is its own CPU! If reality was like a computer, all objects would have to be moved to a special place in order to be processed!<br>If we could take a hint from nature... in our bodies, it's not data that are moved around, it's commands that travel on our "buses", i.e. our nervous system!</p></htmltext>
<tokenext>The real reason behind the problem is that the way a computer operates is totally inappropriate for parallelism .
The concept of data moving through a bus to a proccessing core is totally at odds with parallelism.We do see tremendous parallelism around us .
Why ? Because , in the real world , there is no bus to move the data over , and there is no central core !
In the real world , each object is its own cpu !
If reality was like a computer , all objects would have to be moved to a special place in order to be processed ! If we could take a hint from nature...in our bodies , it 's not data that are moved around , it 's commands that travel on our " buses " , i.e .
our nervous system !</tokentext>
<sentencetext>The real reason behind the problem is that the way a computer operates is totally inappropriate for parallelism.
The concept of data moving through a bus to a processing core is totally at odds with parallelism. We do see tremendous parallelism around us.
Why? Because, in the real world, there is no bus to move the data over, and there is no central core!
In the real world, each object is its own CPU!
If reality was like a computer, all objects would have to be moved to a special place in order to be processed! If we could take a hint from nature... in our bodies, it's not data that are moved around, it's commands that travel on our "buses", i.e. our nervous system!</sentencetext>
</comment>
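The nervous-system idea in the comment above is essentially the message-passing (actor) model: state stays put, and only commands travel. A toy sketch in Python, with all names invented for illustration:

```python
import threading
import queue

class Cell(threading.Thread):
    """Each object is 'its own CPU': it owns its state, and only
    commands reach it over its inbox (the 'nerve')."""
    def __init__(self):
        super().__init__()
        self.inbox = queue.Queue()
        self.state = 0

    def run(self):
        while True:
            cmd, arg = self.inbox.get()
            if cmd == "stop":
                return
            if cmd == "add":          # the data itself never moves over a shared bus
                self.state += arg

cell = Cell()
cell.start()
for n in (1, 2, 3):
    cell.inbox.put(("add", n))        # send commands, not data structures
cell.inbox.put(("stop", None))
cell.join()
# cell.state is now 6, computed entirely inside the 'cell'
```

Erlang processes and the actor runtimes built on the same idea are the established software analogue of this; whether the hardware itself could work that way is a separate question.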
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562636</id>
	<title>Re:reinventing the wheel</title>
	<author>Anonymous</author>
	<datestamp>1269183900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>VMS was doing this shit (and more) 30 years ago.</htmltext>
<tokenext>VMS was doing this shit ( and more ) 30 years ago .</tokentext>
<sentencetext>VMS was doing this shit (and more) 30 years ago.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561670</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31565618</id>
	<title>Re:Microsoft's slowness and Windows 2005</title>
	<author>maweki</author>
	<datestamp>1269264840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I loved the multi-threading and parallelism in Windows ME.

Your Explorer.exe could crash and become unresponsive while your filesystem was being corrupted, and that was all happening while Windows was preparing a bluescreen. I really missed that in XP. There, this would all happen one thing at a time (but you were sure as hell it would happen).
I sometimes miss this kind of certainty on my Linux desktop.</htmltext>
<tokenext>I loved the multi-threading and parallelism in Windows ME .
Your Explorer.exe could crack and become unresponsive while your filesystem was being corrupted and that was all happening while Windows was preparing a bluescreen .
I really missed that in XP .
There , this would all happen one thing at a time ( but you were sure as hell it would happen ) .
I sometimes miss this kind of certainty on my Linux desktop .</tokentext>
<sentencetext>I loved the multi-threading and parallelism in Windows ME.
Your Explorer.exe could crack and become unresponsive while your filesystem was being corrupted and that was all happening while Windows was preparing a bluescreen.
I really missed that in XP.
There, this would all happen one thing at a time (but you were sure as hell it would happen).
I sometimes miss this kind of certainty on my Linux desktop.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562378</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564360</id>
	<title>Re:reinventing the wheel</title>
	<author>RightSaidFred99</author>
	<datestamp>1269289260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Yeah, because you heard 15 years ago that NT was "just VMS" that makes your knowledge, like, _totally_ relevant!  Also, there is not a single person working in Microsoft's OS group who isn't twice as experienced, three times as intelligent, and doesn't make twice the money you do.  But you keep on preaching!</htmltext>
<tokentext>Yeah , because you heard 15 years ago that NT was " just VMS " that makes your knowledge , like , \ _totally \ _ relevant !
Also , there is not a single person working in Microsoft 's OS group who is n't twice as experienced , three times as intelligent , and does n't make twice the money you do .
But you keep on preaching !</tokentext>
<sentencetext>Yeah, because you heard 15 years ago that NT was "just VMS" that makes your knowledge, like, \_totally\_ relevant!
Also, there is not a single person working in Microsoft's OS group who isn't twice as experienced, three times as intelligent, and doesn't make twice the money you do.
But you keep on preaching!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561670</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564182</id>
	<title>It's not even about multiple cores</title>
	<author>macraig</author>
	<datestamp>1269200100000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>What's wrong with at least some operating systems doesn't even have anything to do with multiple cores per se.  They're simply designing the OS and its UI incorrectly, assigning the wrong priorities to events.  <b>No event should EVER supersede the ability of a user to interact and intercede with the operating system (and applications).</b>  Nothing should EVER happen to prevent a user being able to move the mouse, access the start menu, etc., yet this still happens in both Windows and Linux distributions.  That's a fucked-up set of priorities, when the user sitting in front of the damned box - who probably paid for it - gets second billing when it comes to CPU cycles.</p><p>It doesn't matter if there's one CPU core or a hundred.  It's the fundamental design priorities that are screwed up.  Hell should freeze over before a user is denied the ability to interact, intercede, or override, regardless how many cores are present.  Apparently hell has already frozen over and I just didn't get the memo?</p></htmltext>
<tokentext>What 's wrong with at least some operating systems does n't even have anything to do with multiple cores per se .
They 're simply designing the OS and its UI incorrectly , assigning the wrong priorities to events .
No event should EVER supersede the ability of a user to interact and intercede with the operating system ( and applications ) .
Nothing should EVER happen to prevent a user being able to move the mouse , access the start menu , etc. , yet this still happens in both Windows and Linux distributions .
That 's a fucked-up set of priorities , when the user sitting in front of the damned box - who probably paid for it - gets second billing when it comes to CPU cycles.It does n't matter if there 's one CPU core or a hundred .
It 's the fundamental design priorities that are screwed up .
Hell should freeze over before a user is denied the ability to interact , intercede , or override , regardless how many cores are present .
Apparently hell has already frozen over and I just did n't get the memo ?</tokentext>
<sentencetext>What's wrong with at least some operating systems doesn't even have anything to do with multiple cores per se.
They're simply designing the OS and its UI incorrectly, assigning the wrong priorities to events.
No event should EVER supersede the ability of a user to interact and intercede with the operating system (and applications).
Nothing should EVER happen to prevent a user being able to move the mouse, access the start menu, etc., yet this still happens in both Windows and Linux distributions.
That's a fucked-up set of priorities, when the user sitting in front of the damned box - who probably paid for it - gets second billing when it comes to CPU cycles.It doesn't matter if there's one CPU core or a hundred.
It's the fundamental design priorities that are screwed up.
Hell should freeze over before a user is denied the ability to interact, intercede, or override, regardless how many cores are present.
Apparently hell has already frozen over and I just didn't get the memo?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562038</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>drsmithy</author>
	<datestamp>1269179580000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p> <i>For that matter when a copy or move fails in Explorer, why can't I simply resume it once I've fixed whatever the problem is.</i>
</p><p>You can as of Vista.</p></htmltext>
<tokentext>For that matter when a copy or move fails in Explorer , why ca n't I simply resume it once I 've fixed whatever the problem is .
You can as of Vista .</tokentext>
<sentencetext> For that matter when a copy or move fails in Explorer, why can't I simply resume it once I've fixed whatever the problem is.
You can as of Vista.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562488</id>
	<title>Relevant talk</title>
	<author>slasho81</author>
	<datestamp>1269182820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><a href="http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey" title="infoq.com">http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey</a> [infoq.com]</htmltext>
<tokentext>http : //www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey [ infoq.com ]</tokentext>
<sentencetext>http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey [infoq.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561518</id>
	<title>This is new?!</title>
	<author>DavidRawling</author>
	<datestamp>1269175800000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>Oh please, this has been coming for years now. Why has it taken so long for the OS designers to get with the program? We've had multi-CPU servers for literally decades.</htmltext>
<tokentext>Oh please , this has been coming for years now .
Why has it taken so long for the OS designers to get with the program ?
We 've had multi-CPU servers for literally decades .</tokentext>
<sentencetext>Oh please, this has been coming for years now.
Why has it taken so long for the OS designers to get with the program?
We've had multi-CPU servers for literally decades.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564134</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>Phroggy</author>
	<datestamp>1269199140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p> For that matter when a copy or move fails in Explorer, why can't I simply resume it once I've fixed whatever the problem is.</p></div><p>Are you still running XP?  They fixed that in Vista.</p></p>
	</htmltext>
<tokentext>For that matter when a copy or move fails in Explorer , why ca n't I simply resume it once I 've fixed whatever the problem is.Are you still running XP ?
They fixed that in Vista .</tokentext>
<sentencetext> For that matter when a copy or move fails in Explorer, why can't I simply resume it once I've fixed whatever the problem is.Are you still running XP?
They fixed that in Vista.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562180</id>
	<title>Re:Luckily OSX is Already Has MultiCore Tech</title>
	<author>shutdown -p now</author>
	<datestamp>1269180480000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>The trick with GCD is that it is somewhat more high-level than a simple thread pool - it operates in terms of tasks, not threads. The difference is that tasks have explicit dependencies on other tasks - this lets scheduler be smarter about allocating cores.</p></htmltext>
<tokentext>The trick with GCD is that it is somewhat more high-level than a simple thread pool - it operates in terms of tasks , not threads .
The difference is that tasks have explicit dependencies on other tasks - this lets scheduler be smarter about allocating cores .</tokentext>
<sentencetext>The trick with GCD is that it is somewhat more high-level than a simple thread pool - it operates in terms of tasks, not threads.
The difference is that tasks have explicit dependencies on other tasks - this lets scheduler be smarter about allocating cores.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561822</parent>
</comment>
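The task-dependency model described above (tasks with explicit prerequisites, scheduled over a pool, rather than hand-managed threads) can be sketched outside of Apple's API. A minimal Python sketch, where `run_tasks` is a hypothetical helper, not GCD itself: each task runs on a thread pool only once every task it depends on has finished.

```python
from concurrent.futures import ThreadPoolExecutor

def run_tasks(tasks, deps, workers=4):
    """tasks: name -> callable; deps: name -> set of prerequisite names.
    Runs tasks in dependency order, parallelizing independent ones."""
    done = set()
    pending = {t: set(d) for t, d in deps.items()}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while pending:
            # a task is ready once all of its prerequisites have completed
            ready = [t for t, d in pending.items() if d.issubset(done)]
            if not ready:
                raise ValueError("dependency cycle")
            futures = {t: pool.submit(tasks[t]) for t in ready}
            for t, f in futures.items():
                f.result()          # propagate any exception
                done.add(t)
                del pending[t]
    return done
```

Because dependencies are explicit, the scheduler knows that the "b" and "c" waves can share cores while "a" must run first, which is the smarter core allocation the parent comment is describing.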
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31568612</id>
	<title>Re:Microsoft's slowness and Windows 2005</title>
	<author>radish</author>
	<datestamp>1269273360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>So which OS are you using now which is so radically different from Windows in how it handles multi-core processors?</p></htmltext>
<tokentext>So which OS are you using now which is so radically different from Windows in how it handles multi-core processors ?</tokentext>
<sentencetext>So which OS are you using now which is so radically different from Windows in how it handles multi-core processors?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562378</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564054</id>
	<title>Re:Luckily OSX is Already Has MultiCore Tech</title>
	<author>Bigjeff5</author>
	<datestamp>1269197940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>This would really only work on CPUs with a few thousand cores, and even then the CPUs would need to have some very intelligent power management for cores that aren't being used, or are in use but waiting on something like I/O.</p></div><p>Most people only want to run 5-10 applications at a time at most.  By eliminating the need to divide processing time among applications, you eliminate the need for most of the supporting applications in the OS.  Think about it.  You wouldn't need thousands of cores, in a simple setup you could probably get away with around 30.  Heavy users would want 50-100 to handle applications that have been designed for real parallel programming.</p><p>We already have servers with 16 processors standard.  Higher-end virtualization systems have 30+.</p><p>All we're missing is an OS designed to divvy up processors among applications instead of divvying applications among processors.</p></p>
	</htmltext>
<tokentext>This would really only work on CPUs with a few thousand cores , and even then the CPUs would need to have some very intelligent power management for cores that are n't being used , or are in use but waiting on something like I/O.Most people only want to run 5-10 applications at a time at most .
By eliminating the need to divide processing time among applications , you eliminate the need for most of the supporting applications in the OS .
Think about it .
You would n't need thousands of cores , in a simple setup you could probably get away with around 30 .
Heavy users would want 50-100 to handle applications that have been designed for real parallel programming.We already have servers with 16 processors standard .
Higher-end virtualization systems have 30 + .All we 're missing is an OS designed to divvy up processors among applications instead of divvying applications among processors .</tokentext>
<sentencetext>This would really only work on CPUs with a few thousand cores, and even then the CPUs would need to have some very intelligent power management for cores that aren't being used, or are in use but waiting on something like I/O.Most people only want to run 5-10 applications at a time at most.
By eliminating the need to divide processing time among applications, you eliminate the need for most of the supporting applications in the OS.
Think about it.
You wouldn't need thousands of cores, in a simple setup you could probably get away with around 30.
Heavy users would want 50-100 to handle applications that have been designed for real parallel programming.We already have servers with 16 processors standard.
Higher-end virtualization systems have 30+.All we're missing is an OS designed to divvy up processors among applications instead of divvying applications among processors.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561846</parent>
</comment>
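The "divvy up processors among applications" idea above can already be approximated on Linux, where the scheduler lets you dedicate a core set to one process. A sketch (Linux-only; `pin_to_cores` is an illustrative name, the underlying call is the real `sched_setaffinity` syscall exposed by Python's `os` module):

```python
import os

def pin_to_cores(pid, cores):
    """Restrict a process (pid 0 = the caller) to a set of CPU cores.
    The kernel will then schedule it on those cores and no others,
    effectively handing the cores to that application."""
    os.sched_setaffinity(pid, cores)
    return os.sched_getaffinity(pid)
```

An OS built around this idea would do the partitioning itself instead of leaving it to the user, but the mechanism already exists.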
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564410</id>
	<title>It's YOUR fault, Microsoft.</title>
	<author>miffo.swe</author>
	<datestamp>1269289920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"Why should you ever, with all this parallel hardware, ever be waiting for your computer?"</p><p>Well, for starters, it's only the CPU that's parallel in most computers. Your bloated pig of an OS that takes 4 Gig to run is causing more waiting because of I/O than because of CPU tasks. Stop writing often-used routines in high-level languages and you'll see performance gains much bigger than any hardware can bring in the next couple of years.</p></htmltext>
<tokentext>" Why should you ever , with all this parallel hardware , ever be waiting for your computer ?
" Well for starters its only the CPU thats parallell in most computers .
Your bloated pig of an OS that takes 4 Gig to run is causing more waiting because of I/O than because of CPU tasks .
Stop writing often used routines in high level languages and youll see performance gains much bigger than any hardware can bring the next couple of years .</tokentext>
<sentencetext>"Why should you ever, with all this parallel hardware, ever be waiting for your computer?
"Well for starters its only the CPU thats parallell in most computers.
Your bloated pig of an OS that takes 4 Gig to run is causing more waiting because of I/O than because of CPU tasks.
Stop writing often used routines in high level languages and youll see performance gains much bigger than any hardware can bring the next couple of years.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561592</id>
	<title>Re:Fist post!</title>
	<author>Sarten-X</author>
	<datestamp>1269176280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Here you go: R r</p><p>Copy &amp; paste as needed.</p></htmltext>
<tokentext>Here you go : R rCopy &amp; paste as needed .</tokentext>
<sentencetext>Here you go: R rCopy &amp; paste as needed.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561506</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561862</id>
	<title>Re:I hate to say it, but...</title>
	<author>Grem135</author>
	<datestamp>1269178320000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>0</modscore>
	<htmltext>Wow, another Mac fanboy comparing his nice shiny new Mac to an outdated and replaced (2 times over) operating system.
I bet he will say his iPad will outperform a netbook too.  Though the netbook can multitask, run virtually any Windows app, has wifi, you can connect an external DVD drive and (gasp) it can be a color e-book reader just like the iPad!!</htmltext>
<tokentext>Wow , another Mac fanboy compairing his nice shiny new Mac to an outdated and replaced ( 2 times over ) operating system .
I bet he will say his Ipad will out perform a netbook too .
Though the netbook can multitask , run virtually any windows app , has wifi , you can connect an external dvd and ( gasp ) it can be a color Ebook reader just like the ipad !
!</tokentext>
<sentencetext>Wow, another Mac fanboy compairing his nice shiny new Mac to an outdated and replaced (2 times over) operating system.
I bet he will say his Ipad will out perform a netbook too.
Though the netbook can multitask, run virtually any windows app, has wifi, you can connect an external dvd and (gasp) it can be a color Ebook reader just like the ipad!
!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561530</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561670</id>
	<title>reinventing the wheel</title>
	<author>pydev</author>
	<datestamp>1269176940000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>Microsoft should go back and read some of the literature on parallel computing from 20-30 years ago.  Machines with many cores are nothing new.  And Microsoft could have designed for it if they hadn't been busy re-implementing a bloated version of VMS.</p></htmltext>
<tokentext>Microsoft should go back and read some of the literature on parallel computing from 20-30 years ago .
Machines with many cores are nothing new .
And Microsoft could have designed for it if they had n't been busy re-implementing a bloated version of VMS .</tokentext>
<sentencetext>Microsoft should go back and read some of the literature on parallel computing from 20-30 years ago.
Machines with many cores are nothing new.
And Microsoft could have designed for it if they hadn't been busy re-implementing a bloated version of VMS.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564268</id>
	<title>Re:reinventing the wheel</title>
	<author>Anonymous</author>
	<datestamp>1269201420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"if they hadn't been busy re-implementing a bloated version of VMS"<br>Bloated and incomplete. Your comment reminds me of a Vista launch event a couple, three years back that I went to.<br>They showed the equivalent of $ SHOW DEVICE/FILE as the latest... available on VMS like forever and should have been part of NT 3.51 way back when.<br>What a waste of time.</p></htmltext>
<tokentext>" if they had n't been busy re-implementing a bloated version of VMS " Bloated and incomplete .
Your comment reminds me of a Vista launch event a couple , three years back that I went to.They showed the equivalent of $ SHOW DEVICE /FILE as the latest... available on VMS like forever and should have been part of NT 3.51 wayback when.What a waste of time .</tokentext>
<sentencetext>"if they hadn't been busy re-implementing a bloated version of VMS"Bloated and incomplete.
Your comment reminds me of a Vista launch event a couple, three years back that I went to.They showed the equivalent of $ SHOW DEVICE /FILE as the latest... available on VMS like forever and should have been part of NT 3.51 wayback when.What a waste of time.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561670</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561816</id>
	<title>Multithreading is the problem, not the answer</title>
	<author>Anonymous</author>
	<datestamp>1269177960000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p><a href="http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf" title="berkeley.edu" rel="nofollow">The Problem with Threads</a> [berkeley.edu] (UC Berkeley's Prof Edward Lee)<br><a href="http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html" title="blogspot.com" rel="nofollow">How to Solve the Parallel Programming Crisis</a> [blogspot.com]<br><a href="http://rebelscience.blogspot.com/2008/05/half-century-of-crappy-computing-repost.html" title="blogspot.com" rel="nofollow">Half a Century of Crappy Computing</a> [blogspot.com]</p><p>The computer industry will have to wake up to reality sooner or later. We must reinvent the computer; there is no getting around this. The old paradigms from the 20th century do not work anymore because they were not designed for parallel processing.</p></htmltext>
<tokentext>The Problem with Threads [ berkeley.edu ] ( UC Berkeley 's Prof Edward Lee ) How to Solve the Parallel Programming Crisis [ blogspot.com ] Half a Century of Crappy Computing [ blogspot.com ] The computer industry will have to wake up to reality sooner or later .
We must reinvent the computer ; there is no getting around this .
The old paradigms from the 20th century do not work anymore because they were not designed for parallel processing .</tokentext>
<sentencetext>The Problem with Threads [berkeley.edu] (UC Berkeley's Prof Edward Lee)How to Solve the Parallel Programming Crisis [blogspot.com]Half a Century of Crappy Computing [blogspot.com]The computer industry will have to wake up to reality sooner or later.
We must reinvent the computer; there is no getting around this.
The old paradigms from the 20th century do not work anymore because they were not designed for parallel processing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561542</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562618</id>
	<title>One boon</title>
	<author>Jenming</author>
	<datestamp>1269183840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>that comes from neither the Program nor the OS knowing how to schedule the tasks across all CPUs/cores is that when a poorly written or otherwise CPU intensive program pegs one core the OS and other programs running suffer nearly no performance hit.</p></htmltext>
<tokentext>that comes from neither the Program nor the OS knowing how to schedule the tasks across all CPUs/cores is that when a poorly written or otherwise CPU intensive program pegs one core the OS and other programs running suffer nearly no performance hit .</tokentext>
<sentencetext>that comes from neither the Program nor the OS knowing how to schedule the tasks across all CPUs/cores is that when a poorly written or otherwise CPU intensive program pegs one core the OS and other programs running suffer nearly no performance hit.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562604</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>lennier</author>
	<datestamp>1269183600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p> If you are trying to tell me there's no way using the current abstractions to implement this I say you're mad.</p> </div><p>That's between me and my psychiatrist... but I suspect current abstractions <i>do</i> have quite a bit to do with the 'overzealous locking' problem you're describing.</p><p>The current abstractions promote the idea that sequential operation should be the norm and parallel operation is something you have to manually program for. And that manual programming (processes and threads) is done in a number of incompatible ways derived from different models, and at least one of them (threads) is really, really unsafe and practically impossible to do correctly.</p><p>So programmers being lazy and smart, do as little work as they need to, and they tend to lock things in large chunks, because that's simple and it works and can be tested roughly okay.</p><p>I believe that if we had finegrained instruction-level parallelism as the paradigm, the opposite state of affairs would hold: things would naturally be done in parallel, and it would take work to create global synchronisation. Programmers would still be lazy, but you wouldn't have the 'gah why did doing one thing lock up a bunch of others' issue - you'd have the 'gah why is my one big task still waiting on lots of little tasks to complete'.</p><p>Which I think would be a better world and slightly less broken than the current one.</p><p>Admittedly finegrained parallelism does mean the potential for a lot of speed loss in context switching if it's done dumbly, so I think making it work would require highly dynamic languages able to 'recompile' or reallocate code on the fly. Current thinking in language design doesn't favour highly introspective dynamism but is still hung up on compile-time optimisation and hard-coded typing. 
It would also require a complete ground-up rewrite of code merging language and OS into one abstraction, and nobody's keen to do that.</p><p>But in a perfect world, that's where I'd like to go. Parallel everything, and serial as a system-generated local optimisation.</p></p>
	</htmltext>
<tokentext>If you are trying to tell me there 's no way using the current abstractions to implement this I say you 're mad .
That 's between me and my psychiatrist... but I suspect current abstractions do have quite a bit to do with the 'overzealous locking ' problem you 're describing.The current abstractions promote the idea that sequential operation should be the norm and parallel operation is something you have to manually program for .
And that manual programming ( processes and threads ) is done in a number of incompatible ways derived from different models , and at least one of them ( threads ) is really , really unsafe and practically impossible to do correctly.So programmers being lazy and smart , do as little work as they need to , and they tend to lock things in large chunks , because that 's simple and it works and can be tested roughly okay.I believe that if we had finegrained instruction-level parallelism as the paradigm , the opposite state of affairs would hold : things would naturally be done in parallel , and it would take work to create global synchronisation .
Programmers would still be lazy , but you would n't have the 'gah why did doing one thing lock up a bunch of others ' issue - you 'd have the 'gah why is my one big task still waiting on lots of little tasks to complete'.Which I think would be a better world and slightly less broken than the current one.Admittedly finegrained parallelism does mean the potential for a lot of speed loss in context switching if it 's done dumbly , so I think making it work would require highly dynamic languages able to 'recompile ' or reallocate code on the fly .
Current thinking in language design does n't favour highly introspective dynamism but is still hung up on compile-time optimisation and hard-coded typing .
It would also require a complete ground-up rewrite of code merging language and OS into one abstraction , and nobody 's keen to do that.But in a perfect world , that 's where I 'd like to go .
Parallel everything , and serial as a system-generated local optimisation .</tokentext>
<sentencetext> If you are trying to tell me there's no way using the current abstractions to implement this I say you're mad.
That's between me and my psychiatrist... but I suspect current abstractions do have quite a bit to do with the 'overzealous locking' problem you're describing.The current abstractions promote the idea that sequential operation should be the norm and parallel operation is something you have to manually program for.
And that manual programming (processes and threads) is done in a number of incompatible ways derived from different models, and at least one of them (threads) is really, really unsafe and practically impossible to do correctly.So programmers being lazy and smart, do as little work as they need to, and they tend to lock things in large chunks, because that's simple and it works and can be tested roughly okay.I believe that if we had finegrained instruction-level parallelism as the paradigm, the opposite state of affairs would hold: things would naturally be done in parallel, and it would take work to create global synchronisation.
Programmers would still be lazy, but you wouldn't have the 'gah why did doing one thing lock up a bunch of others' issue - you'd have the 'gah why is my one big task still waiting on lots of little tasks to complete'.Which I think would be a better world and slightly less broken than the current one.Admittedly finegrained parallelism does mean the potential for a lot of speed loss in context switching if it's done dumbly, so I think making it work would require highly dynamic languages able to 'recompile' or reallocate code on the fly.
Current thinking in language design doesn't favour highly introspective dynamism but is still hung up on compile-time optimisation and hard-coded typing.
It would also require a complete ground-up rewrite of code merging language and OS into one abstraction, and nobody's keen to do that.But in a perfect world, that's where I'd like to go.
Parallel everything, and serial as a system-generated local optimisation.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558</parent>
</comment>
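The "lock things in large chunks" habit described above is easy to contrast with finer-grained locking. A toy Python sketch (hypothetical `CoarseMap`/`StripedMap` classes, not from any library): under one big lock every writer serializes behind every other, while per-stripe locks let writers to different stripes proceed in parallel.

```python
import threading

class CoarseMap:
    """One lock guards the whole map: any writer blocks all others."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}
    def put(self, k, v):
        with self._lock:
            self._data[k] = v
    def get(self, k):
        with self._lock:
            return self._data.get(k)

class StripedMap:
    """Per-stripe locks: writers hitting different stripes never contend."""
    def __init__(self, stripes=16):
        self._locks = [threading.Lock() for _ in range(stripes)]
        self._buckets = [{} for _ in range(stripes)]
    def _stripe(self, k):
        return hash(k) % len(self._locks)
    def put(self, k, v):
        i = self._stripe(k)
        with self._locks[i]:
            self._buckets[i][k] = v
    def get(self, k):
        i = self._stripe(k)
        with self._locks[i]:
            return self._buckets[i].get(k)
```

The coarse version is exactly what a lazy-but-smart programmer writes first, and it is the source of the "why did doing one thing lock up a bunch of others" effect the comment complains about.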
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564150</id>
	<title>pthread == malloc == bugs</title>
	<author>Anonymous</author>
	<datestamp>1269199500000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>App programmers do not deserve all the blame. The tools for multithreaded development are primitive and difficult to use correctly. It is difficult and expensive to make good reliable MT software.</p><p>Many years ago if I wanted a list of data I would manually malloc, memset and free data. There were lots of bugs because of the tedious management of memory. Now in C++ I write vector, or in Python I use a_list = [] and POOF! I don't need to keep track of ANY details.</p><p>The state of multithreading libraries and tools must evolve to the point where normal (i.e. not VERY skilled or creative) developers can handle them without much thinking. This may require a paradigm shift similar to the object-oriented one in the eighties.</p><p>Objects and exceptions and stack unwinding seem almost obvious to us, but some people had to pave the way for their use. We need some skilled computer scientists to work with skilled library developers to make a new paradigm for developing MT apps. When these as-of-yet undiscovered (or unpopularized) paradigms have been developed, we better hope that big players have the incentive and capacity to implement them in their current systems.</p><p>I for one welcome our multithreaded overlords... when they get here.</p></htmltext>
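The malloc-to-vector analogy above suggests what such a higher-level MT abstraction looks like. A sketch using Python's real `concurrent.futures` module (`crc_all` is an illustrative name): the pool hides thread creation, joining, and work division the way vector hides malloc/free.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def crc_all(blobs):
    """Checksum many byte blobs in parallel without touching a single
    Thread, Lock, or join() call; the pool owns all of those details."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(zlib.crc32, blobs))
```

As with vector over malloc, the win is not raw power but that the tedious, bug-prone bookkeeping moves into a library that normal developers never see.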
<tokenext>App programmers do not deserve all the blame .
The tools for multithreaded development are primitive and difficult to use correctly .
It is difficult and expensive to make good reliable MT software .
Many years ago if I wanted a list of data I would manually malloc , memset and free data .
There were lots of bugs because of the tedious management of memory .
Now in C + + I write vector , or in Python I use a_list = [ ] and POOF !
I do n't need to keep track of ANY details .
The state of multithreading libraries and tools must evolve to the point where normal ( ie not VERY skilled or creative ) developers can handle them without much thinking .
This may require a paradigm shift similar to the object-oriented one in the eighties .
Objects and exceptions and stack unwinding seem almost obvious to us , but some people had to pave the way for their use .
We need some skilled computer scientists to work with skilled library developers to make a new paradigm for developing MT apps .
When these as of yet undiscovered ( or unpopularized ) paradigms have been developed , we better hope that big players have the incentive and capacity to implement them in their current systems .
I for one welcome our multithreaded overlords... when they get here .</tokentext>
<sentencetext>App programmers do not deserve all the blame.
The tools for multithreaded development are primitive and difficult to use correctly.
It is difficult and expensive to make good reliable MT software.
Many years ago if I wanted a list of data I would manually malloc, memset and free data.
There were lots of bugs because of the tedious management of memory.
Now in C++ I write vector, or in Python I use a_list = [] and POOF!
I don't need to keep track of ANY details.
The state of multithreading libraries and tools must evolve to the point where normal (ie not VERY skilled or creative) developers can handle them without much thinking.
This may require a paradigm shift similar to the object-oriented one in the eighties.
Objects and exceptions and stack unwinding seem almost obvious to us, but some people had to pave the way for their use.
We need some skilled computer scientists to work with skilled library developers to make a new paradigm for developing MT apps.
When these as of yet undiscovered (or unpopularized) paradigms have been developed, we better hope that big players have the incentive and capacity to implement them in their current systems.
I for one welcome our multithreaded overlords... when they get here.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562144</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>Anonymous</author>
	<datestamp>1269180300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>UI responsiveness issues have more to do with crappy software and flawed legacy windows APIs (ahhmm DDE) waiting for I/Os from network, naming services, disk drives, etc.  It really has nothing to do with CPU limited activities or SMP aware operations in your typical desktop workstation.</p><p>If you want your home PC to be faster your best bang for the money is to purchase more memory not faster disk drives or faster processors.</p><p>WRT HPC at some point SMP simply does not scale because of memory/cache coherency requirements... kernel codes of insert your favorite operating system here can't work around fundamental lack of available memory bandwidth.</p><p>That's why we have NUMA and believe it or not windows supports NUMA.  Unfortunately if you thought writing SMP applications was difficult when you have to consider the locality of memory you access in your algorithms things really start to suck quick.</p><p>The only solution that I know of is to have smart programming languages which auto-parallelize and auto-optimize codes because people hate doing such things themselves.</p></htmltext>
<tokenext>UI responsiveness issues have more to do with crappy software and flawed legacy windows APIs ( ahhmm DDE ) waiting for I/Os from network , naming services , disk drives , etc .
It really has nothing to do with CPU limited activities or SMP aware operations in your typical desktop workstation .
If you want your home PC to be faster your best bang for the money is to purchase more memory not faster disk drives or faster processors .
WRT HPC at some point SMP simply does not scale because of memory/cache coherency requirements ... kernel codes of insert your favorite operating system here ca n't work around fundamental lack of available memory bandwidth .
That 's why we have NUMA and believe it or not windows supports NUMA .
Unfortunately if you thought writing SMP applications was difficult when you have to consider the locality of memory you access in your algorithms things really start to suck quick .
The only solution that I know of is to have smart programming languages which auto-parallelize and auto-optimize codes because people hate doing such things themselves .</tokentext>
<sentencetext>UI responsiveness issues have more to do with crappy software and flawed legacy windows APIs (ahhmm DDE) waiting for I/Os from network, naming services, disk drives, etc.
It really has nothing to do with CPU limited activities or SMP aware operations in your typical desktop workstation.
If you want your home PC to be faster your best bang for the money is to purchase more memory not faster disk drives or faster processors.
WRT HPC at some point SMP simply does not scale because of memory/cache coherency requirements... kernel codes of insert your favorite operating system here can't work around fundamental lack of available memory bandwidth.
That's why we have NUMA and believe it or not windows supports NUMA.
Unfortunately if you thought writing SMP applications was difficult when you have to consider the locality of memory you access in your algorithms things really start to suck quick.
The only solution that I know of is to have smart programming languages which auto-parallelize and auto-optimize codes because people hate doing such things themselves.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561624</id>
	<title>Luckily OSX Already Has MultiCore Tech</title>
	<author>Anonymous</author>
	<datestamp>1269176520000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext>
It's called <a href="http://en.wikipedia.org/wiki/Grand_Central_Dispatch" title="wikipedia.org" rel="nofollow">Grand Central Dispatch.</a> [wikipedia.org]</htmltext>
<tokenext>It 's called Grand Central Dispatch .
[ wikipedia.org ]</tokentext>
<sentencetext>
It's called Grand Central Dispatch.
[wikipedia.org]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562052</id>
	<title>Re:The problem isnt even that simple</title>
	<author>lennier</author>
	<datestamp>1269179760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>Hardware in today's computers is serial: You access one device, then another, then another.</p></div><p>So you don't have packets coming in/out on sound, network, multiple screens, mouse, keyboard, USB drive, webcam, and hard drive simultaneously?</p>
	</htmltext>
<tokenext>Hardware in today 's computers is serial : You access one device , then another , then another .
So you do n't have packets coming in/out on sound , network , multiple screens , mouse , keyboard , USB drive , webcam , and hard drive simultaneously ?</tokentext>
<sentencetext>Hardware in today's computers is serial: You access one device, then another, then another.
So you don't have packets coming in/out on sound, network, multiple screens, mouse, keyboard, USB drive, webcam, and hard drive simultaneously?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561542</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31580218</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>Anonymous</author>
	<datestamp>1269374820000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>this'll hang your Explorer window for a couple of solid seconds</p></div><p>Not sure what you're talking about here, I haven't seen this kind of behavior in XP for a while at least.<br>Besides, I just attempted this, and for hosts that resolve, I am not seeing any appreciable delay between the time I press enter and the time that an anonymous ftp session begins, where anonymous ftp sessions are allowed.<br>For hosts that do not allow anonymous ftp, there is some delay.<br>For hosts that do not have port 21 open, there is no delay.<br>For non-resolving hosts, there is no delay and an error message pops up.</p>
	</htmltext>
<tokenext>this 'll hang your Explorer window for a couple of solid secondsNot sure what you 're talking about here , I have n't seen this kind of behavior in XP for a while at least.Besides , I just attempted this , and for hosts that resolve , I am not seeing any appreciable delay between the time I press enter and the time that an anonymous ftp session begins , where anonymous ftp sessions are allowed.For hosts that do not allow anonymous ftp , there is some delay.For hosts that do not have port 21 open , there is no delay.For non-resolving hosts , there is no delay and an error message pops up .</tokentext>
<sentencetext>this'll hang your Explorer window for a couple of solid seconds
Not sure what you're talking about here, I haven't seen this kind of behavior in XP for a while at least.
Besides, I just attempted this, and for hosts that resolve, I am not seeing any appreciable delay between the time I press enter and the time that an anonymous ftp session begins, where anonymous ftp sessions are allowed.
For hosts that do not allow anonymous ftp, there is some delay.
For hosts that do not have port 21 open, there is no delay.
For non-resolving hosts, there is no delay and an error message pops up.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561950</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562006</id>
	<title>McVoy's foresight</title>
	<author>Anonymous</author>
	<datestamp>1269179340000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Larry McVoy argued for a pretty fundamental shift about 7 years ago.  Think what you will of the guy and his company, I'm just saying...</p><p>Sadly, I can not find a link right now.</p></htmltext>
<tokenext>Larry McVoy argued for a pretty fundamental shift about 7 years ago .
Think what you will of the guy and his company , I 'm just saying ...
Sadly , I can not find a link right now .</tokentext>
<sentencetext>Larry McVoy argued for a pretty fundamental shift about 7 years ago.
Think what you will of the guy and his company, I'm just saying...
Sadly, I can not find a link right now.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564098</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>Anonymous</author>
	<datestamp>1269198600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p> <i>To anyone using Windows (XP, Vista or 7) right now, go ahead and open up an Explorer window, and type in <a href="ftp://" title="ftp">ftp://</a> [ftp] followed by any url.</i>
</p><p>I just tried this on a Windows 7 PC.  An unresolvable name returns an error in a few seconds.  A resolvable name, with no FTP server on the other end, produces the wait cursor, but I can click on another drive or folder and it responds in a few seconds (other Explorer windows are instantly responsive while this happens).  A working FTP site (eg: ftp.microsoft.com) opens with a listing pretty much instantly.
</p><p>Where's the problem here ?</p></htmltext>
<tokenext>To anyone using Windows ( XP , Vista or 7 ) right now , go ahead and open up an Explorer window , and type in ftp : // [ ftp ] followed by any url .
I just tried this on a Windows 7 PC .
An unresolvable name returns an error in a few seconds .
A resolvable name , with no FTP server on the other end , produces the wait cursor , but I can click on another drive or folder and it responds in a few seconds ( other Explorer windows are instantly responsive while this happens ) .
A working FTP site ( eg : ftp.microsoft.com ) opens with a listing pretty much instantly .
Where 's the problem here ?</tokentext>
<sentencetext> To anyone using Windows (XP, Vista or 7) right now, go ahead and open up an Explorer window, and type in ftp:// [ftp] followed by any url.
I just tried this on a Windows 7 PC.
An unresolvable name returns an error in a few seconds.
A resolvable name, with no FTP server on the other end, produces the wait cursor, but I can click on another drive or folder and it responds in a few seconds (other Explorer windows are instantly responsive while this happens).
A working FTP site (eg: ftp.microsoft.com) opens with a listing pretty much instantly.
Where's the problem here ?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561950</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561726</id>
	<title>Very weak presentation</title>
	<author>gtoomey</author>
	<datestamp>1269177540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>This is a very weak talk to give at a University. Rather than talking about 'parallel programming' and adding an "It Sucks" button, I would expect a discussion on CSP <a href="http://en.wikipedia.org/wiki/Communicating_sequential_processes" title="wikipedia.org">http://en.wikipedia.org/wiki/Communicating_sequential_processes</a> [wikipedia.org] or perhaps hard real time to guarantee responsiveness.  This is the indoctrination you get when you work for Microsoft: you start spruiking low-level marketing mumbo-jumbo to a very technical audience.</htmltext>
<tokenext>This is a very weak talk to give at a University .
Rather than talking about 'parallel programming ' and adding an " It Sucks " button , I would expect a discussion on CSP http : //en.wikipedia.org/wiki/Communicating_sequential_processes [ wikipedia.org ] or perhaps hard real time to guarantee responsiveness .
This is the indoctrination you get when you work for Microsoft : you start spruiking low-level marketing mumbo-jumbo to a very technical audience .</tokentext>
<sentencetext>This is a very weak talk to give at a University.
Rather than talking about 'parallel programming' and adding an "It Sucks" button, I would expect a discussion on CSP http://en.wikipedia.org/wiki/Communicating_sequential_processes [wikipedia.org] or perhaps hard real time to guarantee responsiveness.
This is the indoctrination you get when you work for Microsoft: you start spruiking low-level marketing mumbo-jumbo to a very technical audience.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563316</id>
	<title>Re:Microsoft's slowness and Windows 2005</title>
	<author>dafing</author>
	<datestamp>1269189960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>the first genuine LOL of the week, (it is monday mind you).  Thank you.</htmltext>
<tokenext>the first genuine LOL of the week , ( it is monday mind you ) .
Thank you .</tokentext>
<sentencetext>the first genuine LOL of the week, (it is monday mind you).
Thank you.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562378</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31565252</id>
	<title>You can't easily go parallel with C.</title>
	<author>master_p</author>
	<datestamp>1269261180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>One of the reasons for not being parallel enough is that the C programming language is not easy for parallelism. It's not that it can't be done, but the language's basic purpose is at odds with the principles of parallelism. Something like Haskell is much easier to automatically parallelize, because of the lack of destructive updating, but Haskell goes to the other extreme. Something in between is required, but it's not here yet.</p></htmltext>
<tokenext>One of the reasons for not being parallel enough is that the C programming language is not easy for parallelism .
It 's not that it ca n't be done , but the language 's basic purpose is at odds with the principles of parallelism .
Something like Haskell is much easier to automatically parallelize , because of the lack of destructive updating , but Haskell goes to the other extreme .
Something in between is required , but it 's not here yet .</tokentext>
<sentencetext>One of the reasons for not being parallel enough is that the C programming language is not easy for parallelism.
It's not that it can't be done, but the language's basic purpose is at odds with the principles of parallelism.
Something like Haskell is much easier to automatically parallelize, because of the lack of destructive updating, but Haskell goes to the other extreme.
Something in between is required, but it's not here yet.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561556</id>
	<title>Why?</title>
	<author>DoofusOfDeath</author>
	<datestamp>1269176100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Why should you ever, with all this parallel hardware, ever be waiting for your computer?</p></div></blockquote><p>I dunno - maybe because optimal multiprocessor scheduling is an NP-complete problem?  Or because concurrent computations require coordination at certain points, which is an issue that doesn't exist with single-threaded systems, and it's therefore wishful thinking to assume you'll get linear scaling as you add more cores?</p>
	</htmltext>
<tokenext>Why should you ever , with all this parallel hardware , ever be waiting for your computer ?
I dunno - maybe because optimal multiprocessor scheduling is an NP-complete problem ?
Or because concurrent computations require coordination at certain points , which is an issue that does n't exist with single-threaded systems , and it 's therefore wishful thinking to assume you 'll get linear scaling as you add more cores ?</tokentext>
<sentencetext>Why should you ever, with all this parallel hardware, ever be waiting for your computer?
I dunno - maybe because optimal multiprocessor scheduling is an NP-complete problem?
Or because concurrent computations require coordination at certain points, which is an issue that doesn't exist with single-threaded systems, and it's therefore wishful thinking to assume you'll get linear scaling as you add more cores?
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561542</id>
	<title>The problem isnt even that simple</title>
	<author>Anonymous</author>
	<datestamp>1269176040000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>The problem is that most (if not all) peripheral hardware is not parallel in many senses. Hardware in today's computers is serial: You access one device, then another, then another. There are some cases (such as a few good emulators) which use multi-threaded emulation (sound in one thread, graphics in another) but fundamentally the biggest performance kill is the final IRQs that get called to process data. The structure of modern day computers must change to take advantage of multicore systems.</p></htmltext>
<tokenext>The problem is that most ( if not all ) peripheral hardware is not parallel in many senses .
Hardware in today 's computers is serial : You access one device , then another , then another .
There are some cases ( such as a few good emulators ) which use multi-threaded emulation ( sound in one thread , graphics in another ) but fundamentally the biggest performance kill is the final IRQs that get called to process data .
The structure of modern day computers must change to take advantage of multicore systems .</tokentext>
<sentencetext>The problem is that most (if not all) peripheral hardware is not parallel in many senses.
Hardware in today's computers is serial: You access one device, then another, then another.
There are some cases (such as a few good emulators) which use multi-threaded emulation (sound in one thread, graphics in another) but fundamentally the biggest performance kill is the final IRQs that get called to process data.
The structure of modern day computers must change to take advantage of multicore systems.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562786</id>
	<title>Legacy What?</title>
	<author>Plekto</author>
	<datestamp>1269185040000</datestamp>
	<modclass>Troll</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>The key may not be in throwing more energy into refining techniques such as parallel programming, but rather rethinking the basic abstractions that make up the <b> Microsoft </b> operating systems model.</i></p><p>There.  Fixed it.</p><p>Windows is a lot like Apple's old OS 6/7/8 was - old and haggard and full of legacy cruft that just needs a complete ground-up re-write to address.  Sure, it looks pretty and runs fairly decently now, but it's plainly also at the end of its life-cycle and showing extreme signs of stress.</p><p>Apple took an enormous risk in making a clean break from the past and it seems to be working well for them.  Microsoft needs to as well.  I doubt that it will, though, as it tends to operate more like GM than anything else.  Tons of levels of bureaucracy and a general unwillingness to do serious innovation.  After all, what worked in the past should work in the future?  Right?</p><p>Let's hope that they figure it out sooner rather than later.  Or else it's going to get very very lonely at the top.</p></htmltext>
<tokenext>The key may not be in throwing more energy into refining techniques such as parallel programming , but rather rethinking the basic abstractions that make up the Microsoft operating systems model.There .
Fixed it.Windows is a lot like Apple 's old OS 6/7/8 was - old and haggard and full of legacy cruft that just needs a complete ground-up re-write to address .
Sure , it looks pretty and runs fairly decently now , but it 's plainly also at the end if its life-cycle and showing extreme signs of stress.Apple took an enormous risk in making a clean break from the past and it seems to be working well for them .
Microsoft needs to as well .
I doubt that it will , though , as it tends to operate more like GM than anything else .
Tons of levels of bureaucracy and a general unwillingness to do serious innovation .
After all , what worked in the past should work in the future ?
Right ?
Let 's hope that they figure it out sooner rather than later .
Or else it 's going to get very very lonely at the top .</tokentext>
<sentencetext>The key may not be in throwing more energy into refining techniques such as parallel programming, but rather rethinking the basic abstractions that make up the Microsoft operating systems model.
There.
Fixed it.
Windows is a lot like Apple's old OS 6/7/8 was - old and haggard and full of legacy cruft that just needs a complete ground-up re-write to address.
Sure, it looks pretty and runs fairly decently now, but it's plainly also at the end of its life-cycle and showing extreme signs of stress.
Apple took an enormous risk in making a clean break from the past and it seems to be working well for them.
Microsoft needs to as well.
I doubt that it will, though, as it tends to operate more like GM than anything else.
Tons of levels of bureaucracy and a general unwillingness to do serious innovation.
After all, what worked in the past should work in the future?
Right?
Let's hope that they figure it out sooner rather than later.
Or else it's going to get very very lonely at the top.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564852</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>BetterThanCaesar</author>
	<datestamp>1269255060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Windows XP Service Pack 3 was released on April 21, 2008, and my explorer.exe is dated April 14, 2008. Surely there has been time to fix Explorer's behaviour during those seven years.</htmltext>
<tokenext>Windows XP Service Pack 3 was released on April 21 , 2008 , and my explorer.exe is dated April 14 , 2008 .
Surely there has been time to fix Explorer 's behaviour during those seven years .</tokentext>
<sentencetext>Windows XP Service Pack 3 was released on April 21, 2008, and my explorer.exe is dated April 14, 2008.
Surely there has been time to fix Explorer's behaviour during those seven years.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563830</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31565458</id>
	<title>Re:BeOS was doing it...</title>
	<author>dunkelfalke</author>
	<datestamp>1269263880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It was <a href="http://en.wikipedia.org/wiki/BeOS" title="wikipedia.org">Apple that killed BeOS</a> [wikipedia.org], not Microsoft.</p><blockquote><div><p>Initially designed to run on AT&amp;T Hobbit-based hardware, BeOS was later modified to run on PowerPC-based processors: first Be's own systems, later Apple, Inc.'s PowerPC Reference Platform and Common Hardware Reference Platform, with the hope that Apple would purchase or license BeOS as a replacement for its then aging Mac OS Classic.[2]  Apple CEO Gil Amelio started negotiations to buy Be Inc., but negotiations stalled when Be CEO Jean-Louis Gass&#233;e wanted $200 million; Apple was unwilling to offer any more than $125 million. Apple's board of directors decided NeXTSTEP was a better choice and purchased NeXT in 1996 for $429 million, bringing back Apple co-founder Steve Jobs.[3]  To further complicate matters for Be, Apple refused to disclose certain architectural information about its G3 line of computers--information Be deemed critical to making BeOS work on the latest Apple hardware.</p></div></blockquote>
	</htmltext>
<tokenext>It was Apple that killed BeOS [ wikipedia.org ] , not Microsoft .
Initially designed to run on AT&amp;T Hobbit-based hardware , BeOS was later modified to run on PowerPC-based processors : first Be 's own systems , later Apple , Inc. 's PowerPC Reference Platform and Common Hardware Reference Platform , with the hope that Apple would purchase or license BeOS as a replacement for its then aging Mac OS Classic .
[ 2 ] Apple CEO Gil Amelio started negotiations to buy Be Inc. , but negotiations stalled when Be CEO Jean-Louis Gassée wanted $ 200 million ; Apple was unwilling to offer any more than $ 125 million .
Apple 's board of directors decided NeXTSTEP was a better choice and purchased NeXT in 1996 for $ 429 million , bringing back Apple co-founder Steve Jobs .
[ 3 ] To further complicate matters for Be , Apple refused to disclose certain architectural information about its G3 line of computers--information Be deemed critical to making BeOS work on the latest Apple hardware .</tokentext>
<sentencetext>It was Apple that killed BeOS [wikipedia.org], not Microsoft.
Initially designed to run on AT&amp;T Hobbit-based hardware, BeOS was later modified to run on PowerPC-based processors: first Be's own systems, later Apple, Inc.'s PowerPC Reference Platform and Common Hardware Reference Platform, with the hope that Apple would purchase or license BeOS as a replacement for its then aging Mac OS Classic.
[2]  Apple CEO Gil Amelio started negotiations to buy Be Inc., but negotiations stalled when Be CEO Jean-Louis Gassée wanted $200 million; Apple was unwilling to offer any more than $125 million.
Apple's board of directors decided NeXTSTEP was a better choice and purchased NeXT in 1996 for $429 million, bringing back Apple co-founder Steve Jobs.
[3]  To further complicate matters for Be, Apple refused to disclose certain architectural information about its G3 line of computers--information Be deemed critical to making BeOS work on the latest Apple hardware.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561884</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31579674</id>
	<title>Re:The problem: the event-driven model</title>
	<author>Anonymous</author>
	<datestamp>1269281100000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The problem lies with the programmer, not the language (BTW, for a real programmer nowadays, C is the only choice).</p></htmltext>
<tokenext>The problem lies with the programmer , not the language ( BTW , for a real programmer nowadays , C is the only choice ) .</tokentext>
<sentencetext>The problem lies with the programmer, not the language (BTW, for a real programmer nowadays, C is the only choice).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561934</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562196</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>kramulous</author>
	<datestamp>1269180540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>From the article:</p><p><div class="quote"><p>Today's typical desktop computer runs multiple programs at once, playing music while the user writes an e-mail and surfs the Web, for instance.</p><p>"Responsiveness really is king," he said. "This is what people want."</p></div><p>If you can't get those few programs right on a single core, you are going to really suck at getting them going properly on more cores.</p><p>I realise M$ don't have much control over the third party programs but if they use a core to study the instruction loading/unloading patterns, cache/memory access patterns, etc of the core doing the work and self optimise that then they may have a chance.</p>
	</htmltext>
<tokenext>From the article :
Today 's typical desktop computer runs multiple programs at once , playing music while the user writes an e-mail and surfs the Web , for instance .
" Responsiveness really is king , " he said .
" This is what people want .
" If you ca n't get those few programs right on a single core , you are going to really suck at getting them going properly on more cores.I realise M $ do n't have much control over the third party programs but if they use a core to study the instruction loading/unloading patterns , cache/memory access patterns , etc of the core doing the work and self optimise that then they may have a chance .</tokentext>
<sentencetext>From the article:
Today's typical desktop computer runs multiple programs at once, playing music while the user writes an e-mail and surfs the Web, for instance.
"Responsiveness really is king," he said.
"This is what people want.
"If you can't get those few programs right on a single core, you are going to really suck at getting them going properly on more cores.I realise M$ don't have much control over the third party programs but if they use a core to study the instruction loading/unloading patterns, cache/memory access patterns, etc of the core doing the work and self optimise that then they may have a chance.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558</parent>
</comment>
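The "use a spare core to watch the busy one and self-optimise" idea above can at least be sketched in userspace: an observer thread samples a worker's throughput and tunes its batch size. A toy illustration in Python (the counters, the 1000-ops threshold, and the batch-doubling rule are all invented here for illustration; this is not anything Windows actually does):

```python
import threading, time

state = {"done": 0, "batch": 1, "running": True}

def worker():
    # The "core doing the work": processes `batch` items per iteration,
    # paying a fixed per-iteration overhead (the sleep).
    while state["running"]:
        for _ in range(state["batch"]):
            state["done"] += 1
        time.sleep(0.01)

def observer():
    # The spare "core": samples progress and doubles the batch size
    # whenever per-iteration overhead dominates the useful work.
    last = 0
    for _ in range(10):
        time.sleep(0.02)
        rate = state["done"] - last
        last = state["done"]
        if rate < 1000:
            state["batch"] = min(state["batch"] * 2, 256)

w = threading.Thread(target=worker)
o = threading.Thread(target=observer)
w.start(); o.start()
o.join()
state["running"] = False
w.join()
print(state["batch"] > 1)  # True: the observer tuned the worker up
```

Only the worker writes `done` and only the observer writes `batch`, so the toy avoids the locking questions a real self-optimising system would face.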
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561910</id>
	<title>Duh</title>
	<author>Waffle Iron</author>
	<datestamp>1269178680000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><div class="quote"><p>Why should you ever, with all this parallel hardware, ever be waiting for your computer?</p></div><p>For a lot of problems, for the same reason that some guy who just married 8 brides will still have to wait for his baby.</p>
	</htmltext>
<tokenext>Why should you ever , with all this parallel hardware , ever be waiting for your computer ?
' For a lot of problems , for the same reason that some guy who just married 8 brides will still have to wait for his baby .</tokentext>
<sentencetext>Why should you ever, with all this parallel hardware, ever be waiting for your computer?
' For a lot of problems, for the same reason that some guy who just married 8 brides will still have to wait for his baby.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562262</id>
	<title>Re:A more basic question</title>
	<author>hitmark</author>
	<datestamp>1269180960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The Ataris at least booted from ROM, not HDD. This meant when power was cut, things returned to a known good state, as nothing was in the middle of writing (unless you had requested a save from whatever program you were working in).</p><p>How the iPhone does it is less obvious, though I suspect they leave much of the actual boot up to hardware, and also have said hardware return to known good states on a power loss. Also, it could be that the iPhone gives you the "desktop" before it's actually done doing everything in the background. Heck, IIRC Apple have it show you an image of the app's interface that you just launched while loading the actual interface in the background, rather than some splash screen we all have come to know from some program or other. Basically, Apple loves playing mind games to give the illusion things are happening one way, while they're really happening some other way.</p><p>Also, different hardware does different things. If Windows could basically tell the BIOS to take a hike and do the checks itself, rather than basically have the BIOS first check and then have Windows do the same checks a second time, things would be much faster.</p><p>All in all, much of the PC industry is held back by a "need" for backwards compatibility, to some degree or other, with that first IBM PC.</p></htmltext>
<tokenext>the ataris at least booted from rom , not HDD .
This meant when power was cut , things returned to a known good state , as nothing was in the middle of writing ( unless you had requested a save from whatever program you where working in ) .how iphone do it is less obvious , tho i suspect they leave much of the actual boot up to hardware , and also have said hardware return to known good states on a power loss .
Also , it could be that iphone gives you the " desktop " before its actually done doing everything in the background .
Heck , iirc apple have it show you a image of the apps interface that you just launched while loading the actual interface in the background , rather then some splash screen we all have come to know from some program or other .
Basically , apple loves playing mind games to give the illusion things are happening one way , while its really happening some other way.also , different hardware do different things .
If windows could basically tell the bios to take a hike and do the checks itself , rather then basically have the bios first check and then have windows do the same checks a second time , things would be much faster.all in all , much of the PC industry is held back by a " need " for backwards compatibility , to some degree or other , with that first IBM PC .</tokentext>
<sentencetext>the ataris at least booted from rom, not HDD.
This meant when power was cut, things returned to a known good state, as nothing was in the middle of writing (unless you had requested a save from whatever program you where working in).how iphone do it is less obvious, tho i suspect they leave much of the actual boot up to hardware, and also have said hardware return to known good states on a power loss.
Also, it could be that iphone gives you the "desktop" before its actually done doing everything in the background.
Heck, iirc apple have it show you a image of the apps interface that you just launched while loading the actual interface in the background, rather then some splash screen we all have come to know from some program or other.
Basically, apple loves playing mind games to give the illusion things are happening one way, while its really happening some other way.also, different hardware do different things.
If windows could basically tell the bios to take a hike and do the checks itself, rather then basically have the bios first check and then have windows do the same checks a second time, things would be much faster.all in all, much of the PC industry is held back by a "need" for backwards compatibility, to some degree or other, with that first IBM PC.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561736</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561992</id>
	<title>Re:Luckily OSX is Already Has MultiCore Tech</title>
	<author>ceoyoyo</author>
	<datestamp>1269179220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It's a system level thread pool library, along with a nice interface for sending off little bits of code to the pool.</p></htmltext>
<tokenext>It 's a system level thread pool library , along with a nice interface for sending off little bits of code to the pool .</tokentext>
<sentencetext>It's a system level thread pool library, along with a nice interface for sending off little bits of code to the pool.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561822</parent>
</comment>
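The "system-level thread pool plus an interface for sending off little bits of code" pattern described above is easy to approximate in other languages. A minimal sketch using Python's standard concurrent.futures (an analogue of the idea, not GCD itself; the `dispatch` helper is invented here):

```python
from concurrent.futures import ThreadPoolExecutor

# A shared pool, analogous in spirit to a system-wide dispatch queue:
# callers submit small units of work instead of managing threads.
pool = ThreadPoolExecutor(max_workers=4)

def dispatch(fn, *args):
    """Send a little block of code off to the pool; returns a Future."""
    return pool.submit(fn, *args)

futures = [dispatch(pow, 2, n) for n in range(8)]
results = [f.result() for f in futures]
print(results)  # [1, 2, 4, 8, 16, 32, 64, 128]
```

The point of the GCD-style interface is exactly this inversion: applications describe units of work, and the system decides how many threads (cores) to run them on.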
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562542</id>
	<title>Re:4096 processors not enough?</title>
	<author>wolrahnaes</author>
	<datestamp>1269183240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>The largest single system image I'm aware of runs Linux on a <a href="https://docs.google.com/viewer?url=http://www.sgi.com/pdfs/4007.pdf" title="google.com">4096 processor SGI machine with 17TB RAM</a> [google.com]. Maybe he means that Windows needs rework?</p></div><p>I really want to see htop or some other visual display of current CPU/RAM usage running on that.</p>
	</htmltext>
<tokenext>The largest single system image I 'm aware of runs Linux on a 4096 processor SGI machine with 17TB RAM [ google.com ] .
Maybe He means that Windows needs rework ? I really want to see htop or some other visual display of current CPU/RAM usage running on that .</tokentext>
<sentencetext>The largest single system image I'm aware of runs Linux on a 4096 processor SGI machine with 17TB RAM [google.com].
Maybe He means that Windows needs rework?I really want to see htop or some other visual display of current CPU/RAM usage running on that.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561798</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563118</id>
	<title>Re:The problem: the event-driven model</title>
	<author>foniksonik</author>
	<datestamp>1269187920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>On Macs you always could (well, as far back as I go - OS 7.1) continue using the GUI while actions were taking place. This leads to the sometimes annoying but usually hilarious episodes where you accidentally open hundreds of files but then hit cmd+. to cancel all of your opens... so you see a tiling of windows for a few seconds, then an untiling of windows for the next few seconds.</p><p>In Windows this could never happen because it would only open the first file - then the PC would lock up and you'd have to just quit Explorer via Task Manager (which may actually be more productive - unless you actually do want to open hundreds of files... then you're screwed in Windows, it will never happen).</p></htmltext>
<tokenext>On Macs you always could ( well as far back as I go - OS 7.1 ) continue using the GUI while actions were taking place .
This leads to the sometimes annoying but usually hilarious episodes where you accidentally open hundreds of files but then hit cmd + .
to cancel all of your opens.. so you see a tiling of windows for a few seconds , then an untiling of windows for the next few seconds.In Windows this could never happen because it would only open the first file - then the PC would lock up and you 'd have to just quit Explorer via Task Manager ( which may actually be more productive - unless you actually do want to open hundreds of files... then you 're screwed in Windows , it will never happen ) . .</tokentext>
<sentencetext>On Macs you always could (well as far back as I go - OS 7.1) continue using the GUI while actions were taking place.
This leads to the sometimes annoying but usually hilarious episodes where you accidentally open hundreds of files but then hit cmd+.
to cancel all of your opens.. so you see a tiling of windows for a few seconds, then an untiling of windows for the next few seconds.In Windows this could never happen because it would only open the first file - then the PC would lock up and you'd have to just quit Explorer via Task Manager (which may actually be more productive - unless you actually do want to open hundreds of files... then you're screwed in Windows, it will never happen)..</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561934</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562418</id>
	<title>Is this coming to Linux?</title>
	<author>Torrance</author>
	<datestamp>1269182160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>A little off-topic, but I was wondering if anyone knows if a GrandCentral clone may be coming to Linux?</p><p>I know FreeBSD has integrated libdispatch into their kernel, but due to licensing issues this can't be done with the Linux kernel.</p></htmltext>
<tokenext>A little off-topic , but I was wondering if anyone knows if a GrandCentral clone may be coming to Linux ? I know FreeBSD has integrated libdispatch into their kernel , but due to licensing issues this ca n't be done with the Linux kernel .</tokentext>
<sentencetext>A little off-topic, but I was wondering if anyone knows if a GrandCentral clone may be coming to Linux?I know FreeBSD has integrated libdispatch into their kernel, but due to licensing issues this can't be done with the Linux kernel.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561554</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561934</id>
	<title>The problem: the event-driven model</title>
	<author>Animats</author>
	<datestamp>1269178800000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>
A big problem is the event-driven model of most user interfaces. Almost anything that needs to be done is placed on a serial event queue, which is then processed one event at a time.  This prevents race conditions within the GUI, but at a high cost.  Both the Mac and Windows started that way, and to a considerable extent, they still work that way.  So any event which takes more time than expected stalls the whole event queue.  There are attempts to fix this by having "background" processing for events known to be slow, but you have to know which ones are going to be slow in advance.
Intermittently slow operations, like a DNS lookup or something which infrequently requires disk I/O, tend to be bottlenecks.
</p><p>
Most languages still handle concurrency very badly.  C and C++ are clueless about concurrency.  Java and C# know a little about it. Erlang and Go take it more seriously, but are intended for server-side processing.  So GUI programmers don't get much help from the language.
</p><p>In particular, in C and C++, there's locking, but there's no way within the language to <i>even talk about</i> which locks protect which data. Thus, concurrency can't be analyzed automatically. This has become a huge mess in C/C++, as more attributes ("mutable", "volatile", per-thread storage, etc.) have been bolted on to give some hints to the compiler. There's still race condition trouble between compilers and CPUs with long look-ahead and programs with heavy concurrency.
</p><p>
We need better hard-compiled languages that don't punt on concurrency issues.  C++ could potentially have been fixed, but the C++ committee is in denial about the problem; they're still in template la-la land, adding features few need and fewer will use correctly, rather than trying to do something about reliability issues.  C# is only slightly better; Microsoft Research did some work on <a href="http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=A494B5D3E175B81187E5BBF4BFFA2FB9?doi=10.1.1.3.8971&amp;rep=rep1&amp;type=pdf" title="psu.edu">"Polyphonic C#"</a> [psu.edu], but nobody seems to use that.  Yes, there are lots of obscure academic languages that address concurrency.  Few are used in the real world.
</p><p>
Game programmers have more of a clue in this area. They're used to designing software that has to keep the GUI not only updated but visually consistent, even if there are delays in getting data from some external source.  Game developers think a lot about systems which look consistent at all times, and come gracefully into synchronization with outside data sources as the data catches up.  Modern MMORPGs do far better at handling lag than browsers do.  Game developers, though, assume they own most of the available compute resources; they're not trying to minimize CPU consumption so that other work can run.  (Nor do they worry too much about not running down the battery, the other big constraint today.)
</p><p>
Incidentally, modern tools for hardware design know far more about timing and concurrency than anything in the programming world.  It's quite possible to deal with concurrency effectively.  But you pay $100,000 per year <i>per seat</i> for the software tools used in modern CPU design.</p></htmltext>
<tokenext>A big problem is the event-driven model of most user interfaces .
Almost anything that needs to be done is placed on a serial event queue , which is then processed one event at a time .
This prevents race conditions within the GUI , but at a high cost .
Both the Mac and Windows started that way , and to a considerable extent , they still work that way .
So any event which takes more time than expected stalls the whole event queue .
There are attempts to fix this by having " background " processing for events known to be slow , but you have to know which ones are going to be slow in advance .
Intermittently slow operations , like a DNS lookup or something which infrequently requires disk I/O , tend to be bottlenecks .
Most languages still handle concurrency very badly .
C and C + + are clueless about concurrency .
Java and C # know a little about it .
Erlang and Go take it more seriously , but are intended for server-side processing .
So GUI programmers do n't get much help from the language .
In particular , in C and C + + , there 's locking , but there 's no way within the language to even talk about which locks protect which data .
Thus , concurrency ca n't be analyzed automatically .
This has become a huge mess in C/C + + , as more attributes ( " mutable " , " volatile " , per-thread storage , etc .
) have been bolted on to give some hints to the compiler .
There 's still race condition trouble between compilers and CPUs with long look-ahead and programs with heavy concurrency .
We need better hard-compiled languages that do n't punt on concurrency issues .
C + + could potentially have been fixed , but the C + + committee is in denial about the problem ; they 're still in template la-la land , adding features few need and fewer will use correctly , rather than trying to do something about reliability issues .
C # is only slightly better ; Microsoft Research did some work on " Polyphonic C # " [ psu.edu ] , but nobody seems to use that .
Yes , there are lots of obscure academic languages that address concurrency .
Few are used in the real world .
Game programmers have more of a clue in this area .
They 're used to designing software that has to keep the GUI not only updated but visually consistent , even if there are delays in getting data from some external source .
Game developers think a lot about systems which look consistent at all times , and come gracefully into synchronization with outside data sources as the data catches up .
Modern MMORPGs do far better at handling lag than browsers do .
Game developers , though , assume they own most of the available compute resources ; they 're not trying to minimize CPU consumption so that other work can run .
( Nor do they worry too much about not running down the battery , the other big constraint today .
) Incidentally , modern tools for hardware design know far more about timing and concurrency than anything in the programming world .
It 's quite possible to deal with concurrency effectively .
But you pay $ 100,000 per year per seat for the software tools used in modern CPU design .</tokentext>
<sentencetext>
A big problem is the event-driven model of most user interfaces.
Almost anything that needs to be done is placed on a serial event queue, which is then processed one event at a time.
This prevents race conditions within the GUI, but at a high cost.
Both the Mac and Windows started that way, and to a considerable extent, they still work that way.
So any event which takes more time than expected stalls the whole event queue.
There are attempts to fix this by having "background" processing for events known to be slow, but you have to know which ones are going to be slow in advance.
Intermittently slow operations, like a DNS lookup or something which infrequently requires disk I/O, tend to be bottlenecks.
Most languages still handle concurrency very badly.
C and C++ are clueless about concurrency.
Java and C# know a little about it.
Erlang and Go take it more seriously, but are intended for server-side processing.
So GUI programmers don't get much help from the language.
In particular, in C and C++, there's locking, but there's no way within the language to even talk about which locks protect which data.
Thus, concurrency can't be analyzed automatically.
This has become a huge mess in C/C++, as more attributes ("mutable", "volatile", per-thread storage, etc.
) have been bolted on to give some hints to the compiler.
There's still race condition trouble between compilers and CPUs with long look-ahead and programs with heavy concurrency.
We need better hard-compiled languages that don't punt on concurrency issues.
C++ could potentially have been fixed, but the C++ committee is in denial about the problem; they're still in template la-la land, adding features few need and fewer will use correctly, rather than trying to do something about reliability issues.
C# is only slightly better; Microsoft Research did some work on "Polyphonic C#" [psu.edu], but nobody seems to use that.
Yes, there are lots of obscure academic languages that address concurrency.
Few are used in the real world.
Game programmers have more of a clue in this area.
They're used to designing software that has to keep the GUI not only updated but visually consistent, even if there are delays in getting data from some external source.
Game developers think a lot about systems which look consistent at all times, and come gracefully into synchronization with outside data sources as the data catches up.
Modern MMORPGs do far better at handling lag than browsers do.
Game developers, though, assume they own most of the available compute resources; they're not trying to minimize CPU consumption so that other work can run.
(Nor do they worry too much about not running down the battery, the other big constraint today.
)

Incidentally, modern tools for hardware design know far more about timing and concurrency than anything in the programming world.
It's quite possible to deal with concurrency effectively.
But you pay $100,000 per year per seat for the software tools used in modern CPU design.</sentencetext>
</comment>
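The stall described above is easy to reproduce: with one serial queue, a slow handler blocks every event behind it, while handing the slow work to a pool keeps the queue draining. A minimal sketch with plain Python threads standing in for a real GUI loop (`slow_lookup` and the event tuples are invented for illustration):

```python
import queue
import threading
import time
from concurrent.futures import ThreadPoolExecutor

events = queue.Queue()   # the serial event queue
handled = []
pool = ThreadPoolExecutor(max_workers=2)

def slow_lookup(name):
    time.sleep(0.2)                   # stands in for a DNS lookup or disk I/O
    handled.append(("slow", name))

def event_loop():
    while True:
        kind, payload = events.get()  # one event at a time
        if kind == "quit":
            break
        if kind == "slow":
            pool.submit(slow_lookup, payload)  # offload; queue keeps moving
        else:
            handled.append((kind, payload))    # fast events run inline

events.put(("slow", "example.com"))
events.put(("click", 1))
events.put(("click", 2))
events.put(("quit", None))
t = threading.Thread(target=event_loop)
t.start(); t.join()
pool.shutdown(wait=True)
print(handled[:2])  # the clicks were handled before the slow lookup finished
```

Had `slow_lookup` run inline instead of via `pool.submit`, both clicks would have waited the full 0.2 s behind it - the serial-queue stall in miniature. The catch the comment identifies is real even here: the loop must already know which events are "slow".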
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561768</id>
	<title>Microkernel?</title>
	<author>Enry</author>
	<datestamp>1269177720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'm not an OS designer, so I'll admit to possibly being wrong.</p><p>Doesn't a microkernel split parts of the kernel into individual processes?  In the case of a multicore system, different parts of the OS can be running on different cores at the same time.  So inserting a CD doesn't cause the display to freeze, since each are running on a different core.</p></htmltext>
<tokenext>I 'm not an OS designer , so I 'll admit to possibly being wrong.Does n't a microkernel split parts of the kernel into individual processes ?
In the case of a multicore system , different parts of the OS can be running on different cores at the same time .
So inserting a CD does n't cause the display to freeze , since each are running on a different core .</tokentext>
<sentencetext>I'm not an OS designer, so I'll admit to possibly being wrong.Doesn't a microkernel split parts of the kernel into individual processes?
In the case of a multicore system, different parts of the OS can be running on different cores at the same time.
So inserting a CD doesn't cause the display to freeze, since each are running on a different core.</sentencetext>
</comment>
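The question above can be modelled with ordinary threads standing in for separate server processes: each service runs its own receive loop on a message queue, so a slow disk service doesn't stop the display service from answering. A toy model only, not how any real microkernel is implemented:

```python
import queue
import threading
import time

def service(inbox, log, delay):
    # Each "kernel service" is an isolated loop reading its own inbox.
    while True:
        msg = inbox.get()
        if msg == "stop":
            break
        time.sleep(delay)          # the disk service is slow; display is not
        log.append(msg)

disk_q, display_q = queue.Queue(), queue.Queue()
log = []
threads = [
    threading.Thread(target=service, args=(disk_q, log, 0.2)),
    threading.Thread(target=service, args=(display_q, log, 0.0)),
]
for t in threads:
    t.start()

disk_q.put("read CD")      # slow request to one service...
display_q.put("redraw")    # ...does not block the other
time.sleep(0.05)
frozen = "redraw" not in log
disk_q.put("stop"); display_q.put("stop")
for t in threads:
    t.join()
print(frozen)  # False: the display answered while the disk was busy
```

As the Minix reply below this thread notes, the isolation only helps if services don't all funnel through one synchronous bottleneck - here, if every display request first had to go through the disk service, the freeze would come right back.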
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561676</id>
	<title>Re:Dumb programmers</title>
	<author>Anonymous</author>
	<datestamp>1269177000000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Not true; you wait because management fast-tracks stuff out the door without giving developers enough time to code things properly, and management ignores developer concerns in order to get something out there now that will make money at the expense of the end user. I have been coding a long time and have seen this over and over. Management doesn't care about customers or let developers code things correctly - they only care about $$$$$$$</p></htmltext>
<tokenext>not true , you wait because management speed tracks stuff out the door without giving developers enough time to code things properly and management ignores developer concerns in order to get something out there now that will make money at the expense of the end user , I have been coding a long time and have seen this over and over .
Management does n't care about customers or let developers code things correctly - they only care about $ $ $ $ $ $ $</tokentext>
<sentencetext>not true, you wait because management speed tracks stuff out the door without giving developers enough time to code things properly and management ignores developer concerns in order to get something out there now that will make money at the expense of the end user, I have been coding a long time and have seen this over and over.
Management doesn't care about customers or let developers code things correctly - they only care about $$$$$$$</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561578</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562734</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>duguk</author>
	<datestamp>1269184680000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>For that matter when a copy or move fails in Explorer, why can't I simply resume it once I've fixed whatever the problem is?</p></div><p>Try <a href="http://www.ranvik.net/totalcopy/" title="ranvik.net">TotalCopy</a> [ranvik.net], which adds a copy/move to the right-click menu; or <a href="http://www.codesector.com/teracopy.php" title="codesector.com">Teracopy</a> [codesector.com], a commercial (free version available, supports Win7) complete replacement for the sucky Windows copy system.</p>
<p>USB/network freezes and file-copying stalls aren't a fault of CPU cores like you say; Windows is just a sucky OS. Multicore stuff gets complicated, but this isn't going to be a panacea for Microsoft - it's another marketing opportunity.</p>
	</htmltext>
<tokenext>For that matter when a copy or move fails in Explorer , why ca n't I simply resume it once I 've fixed whatever the problemTry TotalCopy [ ranvik.net ] which adds a copy/move in the right click menu ; or Teracopy [ codesector.com ] commercial ( free version available , supports Win7 ) complete replacement for the sucky Windows copy system .
USB/Network freezes and file copying is n't a fault of CPU cores like you say , Windows is just a sucky OS .
Multicore stuff gets complicated , but this is n't going to be a panacea for Microsoft , it 's another marketing opportunity .</tokentext>
<sentencetext>For that matter when a copy or move fails in Explorer, why can't I simply resume it once I've fixed whatever the problemTry TotalCopy [ranvik.net] which adds a copy/move in the right click menu; or Teracopy [codesector.com] commercial (free version available, supports Win7) complete replacement for the sucky Windows copy system.
USB/Network freezes and file copying isn't a fault of CPU cores like you say, Windows is just a sucky OS.
Multicore stuff gets complicated, but this isn't going to be a panacea for Microsoft, it's another marketing opportunity.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563730</id>
	<title>Re:Microsoft's slowness and Windows 2005</title>
	<author>Techman83</author>
	<datestamp>1269193440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If only I had mod points, this certainly isn't funny, but sadly it should be insightful.</htmltext>
<tokenext>If only I had mod points , this certainly is n't funny , but sadly it should be insightful .</tokentext>
<sentencetext>If only I had mod points, this certainly isn't funny, but sadly it should be insightful.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562378</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563804</id>
	<title>Re:Multithreading is the problem, not the answer</title>
	<author>Ken_g6</author>
	<datestamp>1269194340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Well, that two-buffer thing is an interesting idea.  I immediately see two problems with it.</p><p>One, any instruction that writes to the second buffer "dirties" the data in the first.  So you'd need to create an algorithm that goes through the code from a given point to some length, finds all instructions that aren't dirtied by any previous instruction, and then runs them.  This, of course, is a <b>serial</b> process.  It will also run into difficulties with conditionals and loops.</p><p>And two, if the buffers are of any size, data locality starts to become an issue.  Remember, processors have a limited number of registers.  The farther out data is, the bigger the available space, but the slower the access.  This is a hardware problem.</p><p>I'd also like to point out that processors like Intel's Core series already do something kind of similar to this, having three ALUs and three memory access ports, which can all run in parallel from serial code, reordering it if necessary.  Considering that Intel engineers have given up on adding more ALUs, I'd say this process has reached its limits.</p></htmltext>
<tokenext>Well , that two-buffer thing is an interesting idea .
I immediately see two problems with it.One , any instruction that writes to the second buffer " dirties " the data in the first .
So you 'd need to create an algorithm that goes through the code from a given point to some length , finds all instructions that are n't dirtied by any previous instruction , and then runs them .
This , of course , is a serial process .
It will also run into difficulties with conditionals and loops.And two , if the buffers are of any size , data locality starts to become an issue .
Remember , processors have a limited number of registers .
The farther out data is , the bigger the available space , but the slower the access .
This is a hardware problem.I 'd also like to point out that processors like Intel 's Core series already do something kind of similar to this , having three ALUs and three memory access ports , which can all run in parallel from serial code , reordering it if necessary .
Considering that Intel engineers have given up on adding more ALUs , I 'd say this process has reached its limits .</tokentext>
<sentencetext>Well, that two-buffer thing is an interesting idea.
I immediately see two problems with it.One, any instruction that writes to the second buffer "dirties" the data in the first.
So you'd need to create an algorithm that goes through the code from a given point to some length, finds all instructions that aren't dirtied by any previous instruction, and then runs them.
This, of course, is a serial process.
It will also run into difficulties with conditionals and loops.And two, if the buffers are of any size, data locality starts to become an issue.
Remember, processors have a limited number of registers.
The farther out data is, the bigger the available space, but the slower the access.
This is a hardware problem.I'd also like to point out that processors like Intel's Core series already do something kind of similar to this, having three ALUs and three memory access ports, which can all run in parallel from serial code, reordering it if necessary.
Considering that Intel engineers have given up on adding more ALUs, I'd say this process has reached its limits.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561816</parent>
</comment>
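The serial scan described above - walking forward from a point and collecting instructions whose inputs haven't been "dirtied" by an earlier write - can be written down directly. A sketch over a toy (dest, reads) instruction form, invented here for illustration rather than taken from any real ISA:

```python
def independent_prefix_ops(instrs):
    """Return indices of instructions whose reads are not 'dirtied'
    (written) by any earlier instruction in the window, i.e. the set
    that could issue in parallel.  Note the scan itself is serial:
    each step needs the dirtied-set left by all previous steps."""
    dirtied = set()
    ready = []
    for i, (dest, reads) in enumerate(instrs):
        if not (set(reads) & dirtied):
            ready.append(i)
        dirtied.add(dest)
    return ready

# (dest, reads): r2 = r0+r1 ; r3 = r2*r0 ; r4 = r1-r0 ; r5 = r3+r4
window = [("r2", ["r0", "r1"]),
          ("r3", ["r2", "r0"]),   # depends on r2, written above
          ("r4", ["r1", "r0"]),   # independent of earlier writes
          ("r5", ["r3", "r4"])]   # depends on r3 and r4
print(independent_prefix_ops(window))  # [0, 2]
```

This is essentially what an out-of-order core's scheduler does in hardware over a small window, which is why the comment's point about Intel stopping at a handful of ALUs suggests the approach has limited headroom.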
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561754</id>
	<title>Re:Luckily OSX is Already Has MultiCore Tech</title>
	<author>Anonymous</author>
	<datestamp>1269177660000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>It's called <a href="http://en.wikipedia.org/wiki/Grand_Central_Dispatch" title="wikipedia.org" rel="nofollow">Grand Central Dispatch.</a> [wikipedia.org]</p></div><p>Despite having a name and a Wikipedia page, it's not doing a good enough job.</p>
	</htmltext>
<tokenext>It 's called Grand Central Dispatch .
[ wikipedia.org ] Despite having a name and a Wikipedia page , it 's not doing a good enough job .</tokentext>
<sentencetext>It's called Grand Central Dispatch.
[wikipedia.org] Despite having a name and a Wikipedia page, it's not doing a good enough job.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561624</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561878</id>
	<title>Re:Microkernel?</title>
	<author>Anonymous</author>
	<datestamp>1269178500000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>As someone who has tried to make Minix 3 suck less: microkernel doesn't imply being well suited to multiprocessing, but it can help. Minix 3, for example, has disk drivers, network, filesystem etc. as separate processes, but because so many operations depend on the file server, and the file server implementation is mostly synchronous and single threaded, IO will cause the entire system to appear to lock up. It would be possible to fix this, of course, but it's not necessarily easy.</p></htmltext>
<tokenext>As someone who has tried to make Minix 3 suck less ; microkernel does n't imply well suited to multiprocessing , but it can help .
Minix 3 for example , has disk drivers , network , filesystem etc as separate processes , but because so many operations depend on the file server , and the file server implementation is mostly synchronous and single threaded , IO will cause the entire system to appear to lock up .
It would be possible to fix this of course , but it 's not necessarily easy .</tokentext>
<sentencetext>As someone who has tried to make Minix 3 suck less: microkernel doesn't imply being well suited to multiprocessing, but it can help.
Minix 3 for example, has disk drivers, network, filesystem etc as separate processes, but because so many operations depend on the file server, and the file server implementation is mostly synchronous and single threaded, IO will cause the entire system to appear to lock up.
It would be possible to fix this of course, but it's not necessarily easy.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561768</parent>
</comment>
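The lock-up described here follows directly from a synchronous, single-threaded server loop. A toy model (simulated tick costs and invented names, no real Minix IPC) of why one slow disk request stalls every request queued behind it:

```cpp
#include <queue>
#include <vector>

// Toy model of a single-threaded, synchronous server process, as the
// comment describes Minix 3's file server: each request is handled to
// completion before the next is even looked at. Costs are simulated ticks.
struct Request { int cost; };

// Returns the completion time of each request, in arrival order.
std::vector<int> synchronousServer(std::queue<Request> inbox) {
    std::vector<int> done;
    int clock = 0;
    while (!inbox.empty()) {
        clock += inbox.front().cost;   // blocks for the whole operation
        done.push_back(clock);
        inbox.pop();
    }
    return done;
}
```

A 1-tick request queued behind a 100-tick one finishes at tick 101; an asynchronous or multithreaded server could have finished it almost immediately, which is the fix the commenter says is possible but not easy.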
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562988</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>Statecraftsman</author>
	<datestamp>1269186900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I use <a href="http://www.ipmsg.org/tools/fastcopy.html.en" title="ipmsg.org">FastCopy</a> [ipmsg.org] since it's free software and skips files that haven't changed. Has lots of other nice features as well but I've not seen it widely mentioned anywhere. Enjoy.</htmltext>
<tokenext>I use FastCopy [ ipmsg.org ] since it 's free software and skips files that have n't changed .
Has lots of other nice features as well but I 've not seen it widely mentioned anywhere .
Enjoy .</tokentext>
<sentencetext>I use FastCopy [ipmsg.org] since it's free software and skips files that haven't changed.
Has lots of other nice features as well but I've not seen it widely mentioned anywhere.
Enjoy.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772</parent>
</comment>
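The skip-unchanged behaviour praised here can be approximated in a few lines. A minimal sketch using C++17 std::filesystem (same size plus a not-older mtime as the "unchanged" test; that heuristic is an assumption for illustration, not FastCopy's actual algorithm, and there is no error recovery, permission, or symlink handling):

```cpp
#include <filesystem>
#include <fstream>

namespace fs = std::filesystem;

// Copy src into dst, skipping any file whose copy already exists with the
// same size and an mtime at least as new. Rerunning after a failed transfer
// then resumes instead of starting over. Returns the number of files copied.
int copyChanged(const fs::path& src, const fs::path& dst) {
    int copied = 0;
    fs::create_directories(dst);
    for (const auto& e : fs::recursive_directory_iterator(src)) {
        fs::path out = dst / fs::relative(e.path(), src);
        if (e.is_directory()) { fs::create_directories(out); continue; }
        if (fs::exists(out) &&
            fs::file_size(out) == fs::file_size(e.path()) &&
            fs::last_write_time(out) >= fs::last_write_time(e.path()))
            continue;                                   // unchanged: skip
        fs::copy_file(e.path(), out, fs::copy_options::overwrite_existing);
        ++copied;
    }
    return copied;
}

// Small helper for the example below.
void writeFile(const fs::path& p, const char* text) { std::ofstream(p) << text; }
```

Running it twice over the same tree copies everything the first time and nothing the second, which is exactly the resume-after-failure property the Explorer complaints above are missing.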
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31569696</id>
	<title>Summary/translation:</title>
	<author>swordgeek</author>
	<datestamp>1269276420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"Windows soon hopes to catch up to where Unix was in 1993."</p></htmltext>
<tokenext>" Windows soon hopes to catch up to where Unix was in 1993 .
"</tokentext>
<sentencetext>"Windows soon hopes to catch up to where Unix was in 1993.
"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563442</id>
	<title>Re:Microsoft's slowness and Windows 2005</title>
	<author>RMS Eats Toejam</author>
	<datestamp>1269191160000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>I am so glad I stopped using their products in 1999.</p></div><p>But you are still an asshole 11 years later!  What gives?</p>
	</htmltext>
<tokenext>I am so glad I stopped using their products in 1999 . But you are still an asshole 11 years later !
What gives ?</tokentext>
<sentencetext>I am so glad I stopped using their products in 1999. But you are still an asshole 11 years later!
What gives?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562378</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561548</id>
	<title>That kernel architect</title>
	<author>Anonymous</author>
	<datestamp>1269176100000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>is Probertly right.</p></htmltext>
<tokenext>is Probertly right .</tokentext>
<sentencetext>is Probertly right.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564844</id>
	<title>Re:The problem: the event-driven model</title>
	<author>LingNoi</author>
	<datestamp>1269254940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You could (from a user perspective) create the illusion of faster page loads by rendering the parts that the browser determines are on every page. For example, slashdot pages have the same structure on almost every page. I can't see a reason why a browser (after a few page views) couldn't evaluate which parts of a site aren't dynamic and render those while the real page actually loads up. With HTML5 you could extend this further by just changing the data in certain tags.</p><p>You might have problems with pages which are completely different but it's worth giving it a try and seeing the results of using a browser that can do this.</p></htmltext>
<tokenext>You could ( from a user perspective ) increase the illusion of page load times by rendering the stuff that the browser determines is on every page .
For example , slashdot pages have the same structure on almost every page .
I ca n't see a reason why a browser ( after a few page views ) could n't evaluate which parts of a site are n't dynamic and render those while the real page actually loads up .
With HTML5 you could extend this further by just changing the data in certain tags . You might have problems with pages which are completely different but it 's worth giving it a try and see the results of using a browser that can do this .</tokentext>
<sentencetext>You could (from a user perspective) create the illusion of faster page loads by rendering the parts that the browser determines are on every page.
For example, slashdot pages have the same structure on almost every page.
I can't see a reason why a browser (after a few page views) couldn't evaluate which parts of a site aren't dynamic and render those while the real page actually loads up.
With HTML5 you could extend this further by just changing the data in certain tags. You might have problems with pages which are completely different but it's worth giving it a try and seeing the results of using a browser that can do this.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562574</parent>
</comment>
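The "render the stable parts first" idea can be sketched crudely. Everything here is an invented simplification for illustration (treating page snapshots as lines of markup and comparing positionally), not how any real browser works:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Compare two earlier snapshots of the same page, line by line. Lines that
// were identical both times are assumed static and could be painted
// immediately on the next visit; positions that differed are dynamic and
// come back as "" placeholders to be filled when the real page arrives.
std::vector<std::string> staticSkeleton(const std::vector<std::string>& a,
                                        const std::vector<std::string>& b) {
    std::vector<std::string> skel;
    std::size_t n = std::min(a.size(), b.size());
    for (std::size_t i = 0; i < n; ++i)
        skel.push_back(a[i] == b[i] ? a[i] : "");
    return skel;
}
```

Pages with completely different structure defeat the positional comparison, which is the failure mode the comment itself anticipates.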
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561950</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>Kenz0r</author>
	<datestamp>1269178920000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>I wish I could mod you higher than +5, you just summed up some of the things that bother me most about the OS that is somehow still the most popular desktop OS in the world.<br> <br>
To anyone using Windows (XP, Vista or 7) right now, go ahead and open up an Explorer window, and type in <a href="ftp://" title="ftp">ftp://</a> [ftp] followed by any URL.<br>Even when it's a name that obviously won't resolve, or an IP on your very own local network of a machine that just doesn't exist, this'll hang your Explorer window for a couple of solid seconds. If you're a truly patient person, try doing that with a name that does resolve, like <a href="ftp://microsoft.com" title="microsoft.com">ftp://microsoft.com</a> [microsoft.com]. Better yet, try stopping it... say goodbye to your explorer.exe.<br> <br>This is one of the worst user experiences possible, all for a mundane task like using ftp. And this has been present in Windows for what, a decade?</htmltext>
<tokenext>I wish I could mod you higher than + 5 , you just summed up some of the things that bother me most about the OS that is somehow still the most popular desktop OS in the world .
To anyone using Windows ( XP , Vista or 7 ) right now , go ahead and open up an Explorer window , and type in ftp : // [ ftp ] followed by any URL . Even when it 's a name that obviously wo n't resolve , or an IP on your very own local network of a machine that just does n't exist , this 'll hang your Explorer window for a couple of solid seconds .
If you 're a truly patient person , try doing that with a name that does resolve , like ftp : //microsoft.com [ microsoft.com ] .
Better yet , try stopping it.... say goodbye to your explorer.exe .
This is one of the worst user experiences possible , all for a mundane task like using ftp .
And this has been present in Windows for what , a decade ?</tokentext>
<sentencetext>I wish I could mod you higher than +5, you just summed up some of the things that bother me most about the OS that is somehow still the most popular desktop OS in the world.
To anyone using Windows (XP, Vista or 7) right now, go ahead and open up an Explorer window, and type in ftp:// [ftp] followed by any URL. Even when it's a name that obviously won't resolve, or an IP on your very own local network of a machine that just doesn't exist, this'll hang your Explorer window for a couple of solid seconds.
If you're a truly patient person, try doing that with a name that does resolve, like ftp://microsoft.com [microsoft.com] .
Better yet, try stopping it.... say goodbye to your explorer.exe .
This is one of the worst user experiences possible, all for a mundane task like using ftp.
And this has been present in Windows for what, a decade?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>Threni</author>
	<datestamp>1269177780000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>Windows explorer sucks.  It always just abandons copies after a fail - even if you're moving thousands of files over a network.  Yes, you're left wondering which files did/didn't make it.  It's actually easier to sometimes copy all the files you want to shift locally, then move the copy, so that you can resume after a fail. It's laughable you have to do this, however.</p><p>But it's not a concurrency issue, and neither, really, are the first 2 problems you mention.  They're also down to Windows Explorer sucking.</p></htmltext>
<tokenext>Windows explorer sucks .
It always just abandons copies after a fail - even if you 're moving thousands of files over a network .
Yes , you 're left wondering which files did/did n't make it .
It 's actually easier to sometimes copy all the files you want to shift locally , then move the copy , so that you can resume after a fail .
It 's laughable you have to do this , however . But it 's not a concurrency issue , and neither , really , are the first 2 problems you mention .
They 're also down to Windows Explorer sucking .</tokentext>
<sentencetext>Windows explorer sucks.
It always just abandons copies after a fail - even if you're moving thousands of files over a network.
Yes, you're left wondering which files did/didn't make it.
It's actually easier to sometimes copy all the files you want to shift locally, then move the copy, so that you can resume after a fail.
It's laughable you have to do this, however. But it's not a concurrency issue, and neither, really, are the first 2 problems you mention.
They're also down to Windows Explorer sucking.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564278</id>
	<title>Re:The problem: the event-driven model</title>
	<author>IamTheRealMike</author>
	<datestamp>1269201540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>In particular, in C and C++, there's locking, but there's no way within the language to even talk about which locks protect which data.</p></div></blockquote><p>That's true of standard C++. However, GCC has <a href="http://gcc.gnu.org/wiki/ThreadSafetyAnnotation" title="gnu.org">thread safety annotations</a> [gnu.org]. We use them at work; they're pretty handy.</p>
	</htmltext>
<tokenext>In particular , in C and C + + , there 's locking , but there 's no way within the language to even talk about which locks protect which data .
That 's true of standard C + + .
However GCC has thread safety annotations [ gnu.org ] .
We use them at work , they 're pretty handy .</tokentext>
<sentencetext>In particular, in C and C++, there's locking, but there's no way within the language to even talk about which locks protect which data.
That's true of standard C++.
However, GCC has thread safety annotations [gnu.org].
We use them at work; they're pretty handy.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561934</parent>
</comment>
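For readers wondering what those annotations look like: the design on the GCC wiki page ended up implemented in Clang's -Wthread-safety analysis as attributes. A minimal sketch; the `GUARDED_BY` macro is a common local convention, not part of any standard header, and the warning only fires when building with Clang and -Wthread-safety:

```cpp
#include <mutex>

// Expand to the analyzer attribute where it exists, and to nothing
// elsewhere, so the same source builds on any compiler.
#if defined(__clang__)
#define GUARDED_BY(x) __attribute__((guarded_by(x)))
#else
#define GUARDED_BY(x)
#endif

class Counter {
    std::mutex mu_;
    int n_ GUARDED_BY(mu_) = 0;    // analyzer: n_ may only be touched under mu_
public:
    void increment() {
        std::lock_guard<std::mutex> lock(mu_);
        ++n_;                      // OK: mu_ is held
    }
    int value() {
        std::lock_guard<std::mutex> lock(mu_);
        return n_;
    }
};
```

This is exactly the "which lock protects which data" statement the quoted post says standard C++ cannot express: remove the lock_guard from increment() and the analyzer warns at compile time.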
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564522</id>
	<title>Re:Luckily OSX is Already Has MultiCore Tech</title>
	<author>RocketRabbit</author>
	<datestamp>1269248700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yes, but GCD can send tasks to your GPU or any other processing resource that can handle it and is sufficiently free at the moment.</p></htmltext>
<tokenext>Yes , but GCD can send tasks to your GPU or any other processing resource that can handle it and is sufficiently free at the moment .</tokentext>
<sentencetext>Yes, but GCD can send tasks to your GPU or any other processing resource that can handle it and is sufficiently free at the moment.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561822</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564118</id>
	<title>Nothing to see here</title>
	<author>Low Ranked Craig</author>
	<datestamp>1269198900000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext>Please move along</htmltext>
<tokenext>Please move along</tokentext>
<sentencetext>Please move along</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563534</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>Anonymous</author>
	<datestamp>1269191940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><i>some of the things that bother me most about the OS</i></p><p>Blame goes to Microsoft Management for that one</p><p><i>that is somehow still the most popular desktop OS in the world.</i></p><p>Credit goes to Microsoft Legal for that one there.</p><p>Glad to help clarify things for you.</p></htmltext>
<tokenext>" some of the things that bother me most about the OS " Blame goes to Microsoft Management for that one . " that is somehow still the most popular desktop OS in the world . " Credit goes to Microsoft Legal for that one there . Glad to help clarify things for you .</tokentext>
<sentencetext>"some of the things that bother me most about the OS" Blame goes to Microsoft Management for that one. "that is somehow still the most popular desktop OS in the world." Credit goes to Microsoft Legal for that one there. Glad to help clarify things for you.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561950</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562600</id>
	<title>Seriously?</title>
	<author>thetartanavenger</author>
	<datestamp>1269183600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I seriously have to ask, what is this guy on? Of course moving to multicore machines requires an OS rework. Frankly, even Windows has already been reworked to support this, and will continue to evolve in ways that prove beneficial. This is how development works: you gain a better understanding of the problem, change things for the theoretical better, then investigate the next holdup.</p><div class="quote"><p>Why should you ever, with all this parallel hardware, ever be waiting for your computer?</p></div><p>Processing takes time. Chucking multiple cores at a problem doesn't magically make this time disappear. There are always physical limitations with the hardware available. Most of the delays you see nowadays in consumer applications (and others) are not from a lack of processing power, but instead from poor memory speed. For a processor to be able to do any real work it has to load all of the information from the hard drive into memory. Only then does the power of a single core come into play, and for a surprisingly short period of time. Even then accessing memory is an extremely slow operation, as information has to be brought into the caches, and written back/through to memory. The moment you start adding multiple cores on top of this, you suddenly start getting coherence issues between the caches, where one cache writes to data that is shared between the cores.</p><div class="quote"><p>The OS could assign an application a CPU and some memory, and the program itself, using metadata generated by the compiler, would best know how to use these resources</p></div><p>Is he suggesting that you have as many CPUs as you do programs, each with their own high-speed caches? CPUs in general sit there idling for a large percentage of the time, and that's with multiprogramming already in place. Also, the caches are the most expensive form of memory; are consumers going to pay that price, for something where they'll still just have to wait for IO anyway? This just sounds like an extremely large waste of resources. The last part of that statement is also how things already work. Has this guy not heard of OpenMP before? Granted, for the time being people are expected to include this metadata themselves, but this is an area of computing being highly researched, attempting to automate this process as much as possible. Most compilers already do this up to the point of static analysis, and many are gaining new abilities such as speculation to go further.</p><div class="quote"><p>To get the full benefit from multiple cores, developers need to use parallel programming techniques. It remains a difficult discipline to master and hasn't been used much, outside of specialized scientific programs such as climate simulators.</p></div><p>It is difficult, but it's getting easier. Many programmers learnt to program in a sequential imperative way; it takes time to break out of these habits. It is also a discovery process, as we don't entirely know how to make programs in parallel. Languages are adapting to aid this process (many functional languages, for example) but they each have their own issues and limitations. These techniques are being widely used, but much of the problem is that consumer-level programs don't actually have much parallelism to them. It becomes much more obvious for scientific programs, as they are inherently parallel. Again, compilers are making great strides in automating this process for the programmer.</p><div class="quote"><p>You don't want to wait for Microsoft Word to get started because the antivirus program chose that moment to start scanning all your files. Most OSes have some priority scheduling to avoid these bottlenecks, but they are still crude</p></div><p>So think about the specifics of how the hardware itself is going to react to what you're asking and work on a better scheduling mechanism?</p><div class="quote"><p>In this approach, the operating system would no longer resemble the kernel mode of today's OSes, but rather act more like a hypervisor. A concept from virtualization, a hypervisor acts as a layer between the virtual machine and the actual hardware</p></div><p>Again, is this not what already happens? I am a little low on knowledge here, so please correct me if I'm wrong, but I picture a modern kernel and a hypervisor as being much the same thing, just that one is at a much lower level than the other. Modern SMP kernels should be splitting tasks across multiple processors with the kernel only coming into play when it really needs to. The kernel should also be fully capable of working on any core independently (beyond cache coherence). Is this not the case?</p><p>Disclaimer: I am a PhD student in the compiler/architecture area, currently working on practical methods for automating the detection of parallelism, with specific attention to speculative parallelism. Any constructive criticisms are very welcome and appreciated.</p>
	</htmltext>
<tokenext>I seriously have to ask , what is this guy on ?
Of course moving to multicore machines requires an OS rework .
Frankly even windows has already been reworked to support this , and will continue to evolve in ways that prove beneficial .
This is how development works , you gain a better understanding of the problem and then change things for the theoretical better , then investigate the next holdup . Why should you ever , with all this parallel hardware , ever be waiting for your computer ? Processing takes time .
Chucking multiple cores at a problem does n't magically make this time disappear .
There are always physical limitations with the hardware available .
Most of the delays you see nowadays in consumer applications ( and others ) are not from a lack of processing power , but instead from poor memory speed .
For a processor to be able to do any real work it has to load all of the information from the hard drive into memory .
Only then does the power of a single core come into play , and for a surprisingly short period of time .
Even then accessing memory is an extremely slow operation , as information has to be brought into the caches , and written back/through to memory .
The moment you start adding multiple cores on top of this , you suddenly start getting coherence issues between the caches , where one cache writes to data that is shared between the cores . The OS could assign an application a CPU and some memory , and the program itself , using metadata generated by the compiler , would best know how to use these resources . Is he suggesting that you have as many CPUs as you do programs , each with their own high-speed caches ?
CPU 's in general sit there idling for a large percentage of the time , and that 's with multiprogramming already in place .
Also , the caches are the most expensive form of memory , are consumers going to pay that price , for something where they 'll still just have to wait for IO anyway ?
This just sounds like an extremely large waste of resources .
The last part of that statement is also how things already work .
Has this guy not heard of OpenMP before ?
Granted for the time being people are expected to include this metadata themselves , but this is an area of computing being highly researched , attempting to automate this process as much as possible .
Most compilers already do this up to the point of static analysis , and many are gaining new abilities such as speculation to go further . To get the full benefit from multiple cores , developers need to use parallel programming techniques .
It remains a difficult discipline to master and has n't been used much , outside of specialized scientific programs such as climate simulators .
It is difficult , but it 's getting easier .
Many programmers learnt to program in a sequential imperative way , it takes time to break out of these habits .
It is also a discovery process as we do n't entirely know how to make programs in parallel .
Languages are adapting to aid this process ( many functional languages for example ) but they each have their own issues and limitations .
These techniques are being widely used , but much of the problem is that consumer level programs do n't actually have much parallelism to them .
It becomes much more obvious for scientific programs as they are inherently parallel .
Again , compilers are making great strides in automating this process for the programmer . You do n't want to wait for Microsoft Word to get started because the antivirus program chose that moment to start scanning all your files .
Most OSes have some priority scheduling to avoid these bottlenecks , but they are still crude . So think about the specifics of how the hardware itself is going to react to what you 're asking and work on a better scheduling mechanism ? In this approach , the operating system would no longer resemble the kernel mode of today 's OSes , but rather act more like a hypervisor .
A concept from virtualization , a hypervisor acts as a layer between the virtual machine and the actual hardware . Again , is this not what already happens ?
I am a little low on knowledge here so please correct me if I 'm wrong , but I picture a modern kernel and a hypervisor as being much the same thing , just that one is at a much lower level than the other .
Modern SMP kernels should be splitting tasks across multiple processors with the kernel only coming into play when it really needs to .
The kernel should also be fully capable of working on any core independently ( beyond cache coherence ) .
Is this not the case ?
Disclaimer : I am a PhD student in the compiler/architecture area , currently working on practical methods for automating the detection of parallelism , with specific attention to speculative parallelism .
Any constructive criticisms are very welcome and appreciated .</tokentext>
<sentencetext>I seriously have to ask, what is this guy on?
Of course moving to multicore machines requires an OS rework.
Frankly, even Windows has already been reworked to support this, and will continue to evolve in ways that prove beneficial.
This is how development works: you gain a better understanding of the problem, change things for the theoretical better, then investigate the next holdup. "Why should you ever, with all this parallel hardware, ever be waiting for your computer?" Processing takes time.
Chucking multiple cores at a problem doesn't magically make this time disappear.
There are always physical limitations with the hardware available.
Most of the delays you see nowadays in consumer applications (and others) are not from a lack of processing power, but instead from poor memory speed.
For a processor to be able to do any real work it has to load all of the information from the hard drive into memory.
Only then does the power of a single core come into play, and for a surprisingly short period of time.
Even then accessing memory is an extremely slow operation, as information has to be brought into the caches, and written back/through to memory.
The moment you start adding multiple cores on top of this, you suddenly start getting coherence issues between the caches, where one cache writes to data that is shared between the cores. "The OS could assign an application a CPU and some memory, and the program itself, using metadata generated by the compiler, would best know how to use these resources." Is he suggesting that you have as many CPUs as you do programs, each with their own high-speed caches?
CPUs in general sit there idling for a large percentage of the time, and that's with multiprogramming already in place.
Also, the caches are the most expensive form of memory, are consumers going to pay that price, for something where they'll still just have to wait for IO anyway?
This just sounds like an extremely large waste of resources.
The last part of that statement is also how things already work.
Has this guy not heard of OpenMP before?
Granted for the time being people are expected to include this metadata themselves, but this is an area of computing being highly researched, attempting to automate this process as much as possible.
Most compilers already do this up to the point of static analysis, and many are gaining new abilities such as speculation to go further. To get the full benefit from multiple cores, developers need to use parallel programming techniques.
It remains a difficult discipline to master and hasn't been used much, outside of specialized scientific programs such as climate simulators.
It is difficult, but it's getting easier.
Many programmers learnt to program in a sequential imperative way, it takes time to break out of these habits.
It is also a discovery process as we don't entirely know how to make programs in parallel.
Languages are adapting to aid this process (many functional languages for example) but they each have their own issues and limitations.
These techniques are being widely used, but much of the problem is that consumer level programs don't actually have much parallelism to them.
It becomes much more obvious for scientific programs as they are inherently parallel.
Again, compilers are making great strides in automating this process for the programmer. You don't want to wait for Microsoft Word to get started because the antivirus program chose that moment to start scanning all your files.
Most OSes have some priority scheduling to avoid these bottlenecks, but they are still crude. So think about the specifics of how the hardware itself is going to react to what you're asking and work on a better scheduling mechanism? In this approach, the operating system would no longer resemble the kernel mode of today's OSes, but rather act more like a hypervisor.
A concept from virtualization, a hypervisor acts as a layer between the virtual machine and the actual hardware. Again, is this not what already happens?
I am a little low on knowledge here so please correct me if I'm wrong, but I picture a modern kernel and a hypervisor as being much the same thing, just that one is at a much lower level than the other.
Modern SMP kernels should be splitting tasks across multiple processors with the kernel only coming into play when it really needs to.
The kernel should also be fully capable of working on any core independently (beyond cache coherence).
Is this not the case?
Disclaimer: I am a PhD student in the compiler/architecture area, currently working on practical methods for automating the detection of parallelism, with specific attention to speculative parallelism.
Any constructive criticisms are very welcome and appreciated.
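The "easy" end of the techniques mentioned above - work that divides into independent chunks - can be sketched in a few lines (a hypothetical example of mine, not from the article, using Python's standard multiprocessing; prime counting is just a stand-in for any such workload):

```python
# A minimal sketch of data parallelism: the work divides into
# independent chunks, so each core can take one.
from multiprocessing import Pool

def count_primes(bounds):
    # Count primes in [lo, hi) by trial division.
    lo, hi = bounds
    total = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

def parallel_count(limit, workers=4):
    # Split [0, limit) into one chunk per worker; the last chunk
    # absorbs the remainder.
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], limit)
    with Pool(workers) as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    print(parallel_count(100_000))  # matches a plain sequential scan
```

The hard part, as noted above, is that most consumer workloads don't decompose this cleanly.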
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561960</id>
	<title>Re:I hate to say it, but...</title>
	<author>Corporate Troll</author>
	<datestamp>1269178980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If it truly is almost identical hardware, I'd say that your XP installation has a problem.</htmltext>
<tokenext>If it truly is almost identical hardware , I 'd say that your XP installation has a problem .</tokentext>
<sentencetext>If it truly is almost identical hardware, I'd say that your XP installation has a problem.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561530</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562716</id>
	<title>Re:Dumb programmers</title>
	<author>dkleinsc</author>
	<datestamp>1269184500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>You wait because some <b>marketing droid</b> thought it was more important to have animated menus than a <b>responsive computer</b>.</p></div><p>For software coming from proprietary land (which is what a Windows guy like the author in the original article sees as all that really counts), the programmers have little to no control over the feature list. So even if the programmers use the smartest, fastest algorithm that would make Knuth weep for joy, it won't help, because there will be a new checklist feature waiting in the wings to get added.</p><p>That's also a reason for software bloat: for an upgrade to sell, it has to do more than the previous version, not just do the same job in a better way. In open source land, if a better algorithm is created, coded, and accepted, it's in and will tend to get distributed relatively quickly. But if you're talking about a software business, then the work to make software technically better is simply a drain on programmer time that should be going to the next upgrade that can be sold for big bucks.</p></htmltext>
<tokenext>You wait because some marketing droid thought it was more important to have animated menus than a responsive computer.For software coming from proprietary land ( which is what a Windows guy like the author in the original article sees as all that really counts ) , the programmers have little-to-no control over the feature list .
So even if the programmers use the smartest , fastest algorithm that would make Knuth weep for joy , it wo n't help , because there will be a new checklist feature waiting in the wings to get added in.That 's also a reason for software bloat : for an upgrade to sell to the market , it has to do more than the previous version , not just do the same job as the previous version but in a better way .
In open source land , if a better algorithm is created , coded , and accepted , it 's in and will tend to get distributed relatively quickly .
But if you 're a talking about a software business , then the work to make software technically better is simply a drain on programmer time that should be going to the next upgrade that can be sold for big bucks .</tokentext>
<sentencetext>You wait because some marketing droid thought it was more important to have animated menus than a responsive computer.For software coming from proprietary land (which is what a Windows guy like the author in the original article sees as all that really counts), the programmers have little-to-no control over the feature list.
So even if the programmers use the smartest, fastest algorithm that would make Knuth weep for joy, it won't help, because there will be a new checklist feature waiting in the wings to get added in.That's also a reason for software bloat: for an upgrade to sell to the market, it has to do more than the previous version, not just do the same job as the previous version but in a better way.
In open source land, if a better algorithm is created, coded, and accepted, it's in and will tend to get distributed relatively quickly.
But if you're a talking about a software business, then the work to make software technically better is simply a drain on programmer time that should be going to the next upgrade that can be sold for big bucks.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561578</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31569220</id>
	<title>Re:Luckily OSX is Already Has MultiCore Tech</title>
	<author>Anonymous</author>
	<datestamp>1269274920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Moreover, because it is global, you are letting the OS, which is aware of overall resource usage, schedule things with a more global view of efficiency, rather than just worrying about how you want to schedule things with respect to your own application.</p></htmltext>
<tokenext>Moreover , because it is global , you are letting the OS , which is aware of overall resource usage , schedule things with a more global view to efficiency , rather than just worrying about how you want to schedule things with respect to your own application .</tokentext>
<sentencetext>Moreover, because it is global, you are letting the OS, which is aware of overall resource usage, schedule things with a more global view to efficiency, rather than just worrying about how you want to schedule things with respect to your own application.</sentencetext>
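The "global view" point can be illustrated with a loose sketch (my analogy, not Apple's actual GCD API): unrelated components submit work to one shared pool sized for the whole machine, so the runtime decides how much runs at once, rather than each application spawning its own threads.

```python
# One shared, globally sized pool: the runtime -- not each component --
# decides the overall degree of concurrency.
import os
from concurrent.futures import ThreadPoolExecutor

# A single global pool, shared by every component in the process.
GLOBAL_POOL = ThreadPoolExecutor(max_workers=os.cpu_count() or 1)

def submit(task, *args):
    # Components queue work here instead of creating their own threads.
    return GLOBAL_POOL.submit(task, *args)

# Two unrelated "components" hand work to the same pool.
a = submit(sum, range(1_000_000))
b = submit(max, [3, 1, 4, 1, 5])
print(a.result(), b.result())
```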
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562180</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564100</id>
	<title>err .. not quite</title>
	<author>nilbog</author>
	<datestamp>1269198660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Let's not say that modern operating systems need to be reworked when the only one that needs to be reworked is Windows.  Don't act like everyone else has the same problem.  I'm not trying to be a fanboy here, but see this page - specifically the section marked "grand central dispatch" : <a href="http://www.apple.com/macosx/technology/" title="apple.com">http://www.apple.com/macosx/technology/</a> [apple.com]</p></htmltext>
<tokenext>Let 's not say that modern operating systems need to be reworked when the only one that needs to be reworked is Windows .
Do n't act like everyone else has the same problem .
I 'm not trying to be a fanboy here , but see this page - specifically the section marked " grand central dispatch " : http : //www.apple.com/macosx/technology/ [ apple.com ]</tokentext>
<sentencetext>Let's not say that modern operating systems need to be reworked when the only one that needs to be reworked is Windows.
Don't act like everyone else has the same problem.
I'm not trying to be a fanboy here, but see this page - specifically the section marked "grand central dispatch" : http://www.apple.com/macosx/technology/ [apple.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562534</id>
	<title>Ass Backward, Sorry</title>
	<author>Anonymous</author>
	<datestamp>1269183180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>On the contrary, everything in software should be event driven, down to the individual instructions. That is true parallelism, the future of computing.</p><p><a href="http://rebelscience.blogspot.com/2008/05/half-century-of-crappy-computing-repost.html" title="blogspot.com" rel="nofollow">Half a Century of Crappy Computing</a> [blogspot.com]</p></htmltext>
<tokenext>On the contrary , everything in software should be event driven , down to the individual instructions .
That is true parallelism , the future of computing.Half a Century of Crappy Computing [ blogspot.com ]</tokentext>
<sentencetext>On the contrary, everything in software should be event driven, down to the individual instructions.
That is true parallelism, the future of computing. Half a Century of Crappy Computing [blogspot.com]
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561934</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562072</id>
	<title>Dumb post from someone who doesn't program</title>
	<author>Anonymous</author>
	<datestamp>1269179820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>So programmers were dumb for designing an OS that was built around existing hardware and still meets the needs of most software users? Wow, that was really dumb of them to design an OS that would have been useless then and only useful to about 1% of the population today.

For consumer multitasking, a dual core and Windows 7 is plenty responsive. The gains from this theoretical OS would be imperceptible to most.</htmltext>
<tokenext>So programmers were dumb for designing an OS that was built around existing hardware and still meets the needs of most software users ?
Wow that was really dumb of them to design an OS that would be have been useless then and only useful to about 1 \ % of the population today .
For consumer multitasking a dual core and Windows 7 is plenty responsive .
The gains from this theoretical OS would be imperceptible to most .</tokentext>
<sentencetext>So programmers were dumb for designing an OS that was built around existing hardware and still meets the needs of most software users?
Wow, that was really dumb of them to design an OS that would have been useless then and only useful to about 1% of the population today.
For consumer multitasking a dual core and Windows 7 is plenty responsive.
The gains from this theoretical OS would be imperceptible to most.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561578</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561736</id>
	<title>A more basic question</title>
	<author>michaelmalak</author>
	<datestamp>1269177600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I have a more basic question.<p>With computers past and present -- Atari 8-bit, Atari ST, iPhone -- with "instant on", why does Windows not have this yet?  This goes back to the <a href="http://slashdot.org/comments.pl?sid=1426425&amp;cid=29938477" title="slashdot.org">lost decade</a> [slashdot.org].  What has Microsoft been doing since XP was released?</p></htmltext>
<tokenext>I have a more basic question.With computers past and present -- Atari 8-bit , Atari ST , iPhone -- with " instant on " , why does Windows not have this yet ?
This goes back to the lost decade [ slashdot.org ] .
What has Microsoft been doing since XP was released ?</tokentext>
<sentencetext>I have a more basic question.With computers past and present -- Atari 8-bit, Atari ST, iPhone -- with "instant on", why does Windows not have this yet?
This goes back to the lost decade [slashdot.org].
What has Microsoft been doing since XP was released?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562026</id>
	<title>Clearly this is a windows issue to note...</title>
	<author>3seas</author>
	<datestamp>1269179520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>....for those buying Windows 7.</p><p>Is this not an admission of Microsoft's continued failure to properly support the hardware it runs on?</p><p>I joke at work that the reason I have to select tools twice sometimes in AutoCAD is because the dual processors each figure the other is doing it, but when I pick the tool twice, they run out of excuses and do it...mostly.</p><p>Now I know....it's not a joke....</p></htmltext>
<tokenext>....for those buying Windows 7.As is this not an admittance of Microsoft continued failure to properly support the hardware it runs on ? I joke at work that the reason I have to select tools twice sometimes in autocad is because the dual processors are figuring the other processor is doing it , but when I pick the tool twice , they run out of excuses and do it...mostly.Now I know ....its not a joke... .</tokentext>
<sentencetext>....for those buying Windows 7. Is this not an admission of Microsoft's continued failure to properly support the hardware it runs on? I joke at work that the reason I have to select tools twice sometimes in AutoCAD is because the dual processors each figure the other is doing it, but when I pick the tool twice, they run out of excuses and do it...mostly. Now I know....it's not a joke....</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562648</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>b4dc0d3r</author>
	<datestamp>1269183960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I've been wondering for years why disk I/O makes the machine slow down. Antivirus scans taking 5% CPU time should not reduce the response rate of the system, because disk reads should be the bottleneck, not scanning. But I always see a slowdown when that happens, and with disk access in general. File copying doesn't seem to show as much of a problem, but in theory it should be reading and writing combined. A 95% idle CPU should respond instantly, but it doesn't.</p><p>I remember watching a 233 MHz box run NT 4 Server, and things like zipping up a file would make the screen update so slowly you could watch each box paint. First the outline, then the fill, then any text, and on to the next box. It got better as computers got faster, and I assume MS fixed some aspects of it, but it's still there.</p><p>I wrote a VBS that uses WMI objects to lower the process priority for several things (I know, we're supposed to disable them from starting up, but I can't in this case). The biggest offenders are: wmiprvse.exe, msiexec.exe, wuauclt.exe. When they start using CPU, the interface locks. I never knew why.</p><p>Now it makes sense. Task switching shouldn't be a problem if you're not using all your memory - paging does add a delay, especially if you have to fetch a little-used resource from the far end of a file that's been dropped from memory due to least-recently-used swapping. So without paging, interrupt overhead could explain it - but only if the 'system' process ignores that in its reporting - because I wouldn't even notice if CPU usage spiked when the UI became sluggish.</p><p>Then again, shouldn't the UI have the highest priority? Programs shouldn't decide what you're working on - if you click something else, it should work.
If a runaway program is doing something stupid, it shouldn't take 5 minutes to bring up Task Manager or Process Explorer - that's why I had to run my VBScript on startup in the first place.</p><p>When I can't figure out what's using the CPU because Task Manager won't come up because some program is using all the CPU, that's bad. But when I finally get Task Manager open and it shows 99% idle and I still can't select a line and "End Process" because the UI is barely responding, that's terrible. Watching Task Manager paint each line on a dual-core 2.53 GHz notebook, with 99% idle time, is unacceptable.</p></htmltext>
<tokenext>I 've been wondering for years why disk I/O makes the machine slow down .
Antivirus scans taking 5 \ % CPU time should not reduce the response rate of the system , because disk reads should be the bottleneck , not scanning .
But I always see a slowdown when that happens , disk access in general .
File copying does n't see to show as much of a problem , but it should be in theory reading and writing combined .
95 \ % idle CPU should respond instantly , but it does n't.I remember watching a 233mHz box run NT 4 server , and things like zipping a file up would make the screen update so slowly you could watch each box paint .
First the outline , then the fill , then any text , and on to the next box .
It got better , as computers got faster and I assume MS fixed some aspects of it , but it 's still there.I wrote a VBS that uses WMI objects to lower the process priority for several things ( I know , we 're supposed to disable them from starting up , but I ca n't in this case ) .
The biggest offenders are : wmiprvse.exe , msiexec.exe , wuauclt.exe .
When they start using CPU , the interface locks .
I never knew why.Now it makes sense .
Task switching should n't be a problem if you 're not using all your memory - paging does add a delay , especially if you just have to fetch a little used resource from the far end of a file that 's been dropped from memory due to least-recently-used swapping .
So without paging , interrupt overhead could explain it - but only if the 'system ' process ignores that in its reporting - because I would n't even notice if CPU usage spiked when the UI became sluggish.Then again , should n't the UI have highest priority ?
Programs should n't decide what you 're working on - if you click something else , it should work .
If a runaway program is doing something stupid , it should n't take 5 minutes to bring up task manager , or process explorer - that 's why I had to run my VBScript on startup in the first place.When I ca n't figure out what 's using the CPU because task manager wo n't come up because some program is using all the CPU , that 's bad .
But when I finally get task manager open and it shows 99 \ % idle and I still ca n't select a line and " End Process " because the UI is barely responding , that 's terrible .
Watching task manager paint each line on a dual core 2.53 gHz notebook , with 99 \ % idle time , is unacceptable .</tokentext>
<sentencetext>I've been wondering for years why disk I/O makes the machine slow down.
Antivirus scans taking 5% CPU time should not reduce the response rate of the system, because disk reads should be the bottleneck, not scanning.
But I always see a slowdown when that happens, and with disk access in general.
File copying doesn't seem to show as much of a problem, but in theory it should be reading and writing combined.
A 95% idle CPU should respond instantly, but it doesn't. I remember watching a 233 MHz box run NT 4 Server, and things like zipping up a file would make the screen update so slowly you could watch each box paint.
First the outline, then the fill, then any text, and on to the next box.
It got better as computers got faster, and I assume MS fixed some aspects of it, but it's still there. I wrote a VBS that uses WMI objects to lower the process priority for several things (I know, we're supposed to disable them from starting up, but I can't in this case).
The biggest offenders are: wmiprvse.exe, msiexec.exe, wuauclt.exe.
When they start using CPU, the interface locks.
I never knew why. Now it makes sense.
Task switching shouldn't be a problem if you're not using all your memory - paging does add a delay, especially if you have to fetch a little-used resource from the far end of a file that's been dropped from memory due to least-recently-used swapping.
So without paging, interrupt overhead could explain it - but only if the 'system' process ignores that in its reporting - because I wouldn't even notice if CPU usage spiked when the UI became sluggish. Then again, shouldn't the UI have the highest priority?
Programs shouldn't decide what you're working on - if you click something else, it should work.
If a runaway program is doing something stupid, it shouldn't take 5 minutes to bring up Task Manager or Process Explorer - that's why I had to run my VBScript on startup in the first place. When I can't figure out what's using the CPU because Task Manager won't come up because some program is using all the CPU, that's bad.
But when I finally get Task Manager open and it shows 99% idle and I still can't select a line and "End Process" because the UI is barely responding, that's terrible.
Watching Task Manager paint each line on a dual-core 2.53 GHz notebook, with 99% idle time, is unacceptable.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31569244</id>
	<title>Re:The way computers operate is to blame</title>
	<author>drinkypoo</author>
	<datestamp>1269275040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>If we could take a hint from nature...in our bodies, it's not data that are moved around, it's commands that travel on our "buses", i.e. our nervous system!</p></div><p>We have only about half a clue as to what is going on in our brains and you want to talk about data or commands. In reality, it's neither; your entire nervous system is engaged in processing. Your "nerves" don't just relay commands. They affect what they carry.</p><p>In any case, there is a comparison here to be drawn between different computing architectures. Starting with the transputer we had the concept of an entirely non-uniform multiprocessor system with many parallel links. And today, AMD systems in common use have only certain choke points (e.g. between the processors and the peripherals, but not between processors and memory) and are NUMA as well. Further, peripheral cards in computers like RAID controllers or (obviously) graphics adapters with powerful GPUs represent an inherently distributed computing model where minimal data is sent over the bus; textures go to the video card, but from there only instructions on what to do with them will be transferred. Isn't that what you're talking about?</p>
	</htmltext>
<tokenext>If we could take a hint from nature...in our bodies , it 's not data that are moved around , it 's commands that travel on our " buses " , i.e .
our nervous system ! We have only about half a clue as to what is going on in our brains and you want to talk about data or commands .
In reality , it 's neither ; your entire nervous system is engaged in processing .
Your " nerves " do n't just relay commands .
They affect what they carry.In any case , there is a comparison here to be drawn between different computing architectures .
Starting with the transputer we had the concept of an entirely non-uniform multiprocessor system with many parallel links .
And today , AMD systems in common use have only certain choke points ( e.g .
between the processors and the peripherals , but not between processors and memory ) and are NUMA as well .
Further , peripheral cards in computers like RAID controllers or ( obviously ) graphics adapters with powerful GPUs represent an inherently distributed computing model where minimal data is sent over the bus ; textures go to the video card , but from there only instructions on what to do with them will be transferred .
Is n't that what you 're talking about ?</tokentext>
<sentencetext>If we could take a hint from nature...in our bodies, it's not data that are moved around, it's commands that travel on our "buses", i.e. our nervous system!
We have only about half a clue as to what is going on in our brains and you want to talk about data or commands.
In reality, it's neither; your entire nervous system is engaged in processing.
Your "nerves" don't just relay commands.
They affect what they carry. In any case, there is a comparison here to be drawn between different computing architectures.
Starting with the transputer we had the concept of an entirely non-uniform multiprocessor system with many parallel links.
And today, AMD systems in common use have only certain choke points (e.g. between the processors and the peripherals, but not between processors and memory) and are NUMA as well.
Further, peripheral cards in computers like RAID controllers or (obviously) graphics adapters with powerful GPUs represent an inherently distributed computing model where minimal data is sent over the bus; textures go to the video card, but from there only instructions on what to do with them will be transferred.
Isn't that what you're talking about?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562014</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562558</id>
	<title>Re:Multithreading is the problem, not the answer</title>
	<author>Tablizer</author>
	<datestamp>1269183360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>More use of RDBMSes in applications could also potentially help, because for the most part SQL statements don't dictate the order in which operations are done. For a simple example, take "SELECT * FROM Foo WHERE x=3 AND y=2 AND z=7".</p><p>If x, y, and z are indexed, then each index can be scanned in parallel to find the sub-matches. If they are not indexed, then table Foo can be split into 3 groups and each group searched in parallel. (It may also help if tables are split across multiple physical drives row-wise and column-wise.)</p><p>However, it should be pointed out that existing RDBMSes may not currently be optimized to take advantage of this.</p></htmltext>
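The split-and-scan idea for the unindexed case can be sketched as follows (a hedged illustration of mine, not how any particular RDBMS engine is actually implemented):

```python
# An unindexed WHERE-clause scan split into chunks that are searched
# in parallel and then merged.
from concurrent.futures import ThreadPoolExecutor

def scan(chunk, predicate):
    # Sequentially filter one slice of the table.
    return [row for row in chunk if predicate(row)]

def parallel_select(table, predicate, workers=3):
    # Split the table into roughly equal groups, one per worker.
    n = max(1, len(table) // workers)
    chunks = [table[i:i + n] for i in range(0, len(table), n)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(scan, chunks, [predicate] * len(chunks))
    return [row for part in parts for row in part]

# The example query: SELECT * FROM Foo WHERE x=3 AND y=2 AND z=7
foo = [{"x": i % 5, "y": i % 3, "z": i % 8} for i in range(1000)]
rows = parallel_select(foo, lambda r: r["x"] == 3 and r["y"] == 2 and r["z"] == 7)
```

Because the predicate doesn't care which rows it sees first, the chunks can be scanned in any order - exactly the freedom the declarative SQL form leaves the engine.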
<tokenext>More usage of RDBMS in applications could also potentially help because for the most part SQL statements do n't dictate the order in which operations are done in .
For a simple example , take " SELECT * FROM Foo WHERE x = 3 AND y = 2 AND z = 7 " .If x , y , and z are indexed , then each index can be scanned in parallel to find the sub-matches .
If they are not indexed , then table Foo can be split into 3 groups and each group searched in parallel .
( It may also help if tables are split across multiple physical drives row-wise and column-wise .
) However , it should be pointed out that existing RDBMS may not be currently optimized to take advantage of such .
   </tokentext>
<sentencetext>More use of RDBMSes in applications could also potentially help, because for the most part SQL statements don't dictate the order in which operations are done.
For a simple example, take "SELECT * FROM Foo WHERE x=3 AND y=2 AND z=7". If x, y, and z are indexed, then each index can be scanned in parallel to find the sub-matches.
If they are not indexed, then table Foo can be split into 3 groups and each group searched in parallel.
(It may also help if tables are split across multiple physical drives row-wise and column-wise.)
However, it should be pointed out that existing RDBMSes may not currently be optimized to take advantage of this.
   </sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561816</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31569174</id>
	<title>paradigm will change a little</title>
	<author>Anonymous</author>
	<datestamp>1269274800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><i>"Why should you ever, with all this parallel hardware, ever be waiting for your computer?"</i></p><p>'Cause it still likely needs to go to one display screen. Single display? There's your bottleneck.</p></htmltext>
<tokenext>" Why should you ever , with all this parallel hardware , ever be waiting for your computer ?
" Cause it still needs to goto 1 display screen likely .
Single display ?
there 's your bottleneck .</tokentext>
<sentencetext>"Why should you ever, with all this parallel hardware, ever be waiting for your computer?
"Cause it still needs to goto 1 display screen likely.
Single display?
there's your bottleneck.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31565354</id>
	<title>Re:Luckily OSX is Already Has MultiCore Tech</title>
	<author>Anonymous</author>
	<datestamp>1269262380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I remain unimpressed. Do you think that the legions of assholes whose application development experience consists primarily of running some wizards in VB.net will be able to break their applications into chunks suitable for parallelization without actually slowing things down? Do you even think they will bother to measure?</p><p>If you need an example, look no further than the Xilinx ISE tools, which recently came advertised as supporting multiple cores. If you can even coerce the software into running on multiple cores (the command-line options in the manual are only accepted by one of the two tools for which Xilinx claims support), good luck getting it to run any faster multi-threaded than it does single-threaded.</p></htmltext>
<tokenext>I remain unimpressed .
Do you think that the legions of assholes whose application development experience consists primarily of running some wizards in VB.net will be able to break their applications into chunks suitable for parallelization without actually slowing things down ?
do you even think they will bother to measure ? If you need an example , look no further than the Xilinx ISE tools , which recently came advertised as supporting multiple cores .
If you can even coerce the software into running on multiple cores ( the command line options in the manual are only accepted by one of the two tools for which Xilinx claims support ) , good luck getting it to run any faster multi-threaded than it does when single-threaded .</tokentext>
<sentencetext>I remain unimpressed.
Do you think that the legions of assholes whose application development experience consists primarily of running some wizards in VB.net will be able to break their applications into chunks suitable for parallelization without actually slowing things down?
Do you even think they will bother to measure? If you need an example, look no further than the Xilinx ISE tools, which recently came advertised as supporting multiple cores.
If you can even coerce the software into running on multiple cores (the command line options in the manual are only accepted by one of the two tools for which Xilinx claims support), good luck getting it to run any faster multi-threaded than it does when single-threaded.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562194</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562476</id>
	<title>Re:4096 processors not enough?</title>
	<author>Anonymous</author>
	<datestamp>1269182700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>You realize that he is talking about multi-core systems, not single-core, multi-processor systems?</p></htmltext>
<tokenext>You realize that he is talking about multi-core systems not single-core , multi-processor systems ?</tokentext>
<sentencetext>You realize that he is talking about multi-core systems not single-core, multi-processor systems?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561798</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562450</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>bertok</author>
	<datestamp>1269182460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>...the implementation sucks.</p><p>Why for example does Windows Explorer decide to freeze ALL network connections when a single URN isn't quickly resolved? Why is it that when my USB drive wakes up, all explorer windows freeze? If you are trying to tell me there's no way using the current abstractions to implement this I say you're mad. For that matter when a copy or move fails in Explorer, why can't I simply resume it once I've fixed whatever the problem is. You're left piecing together what has and hasn't been moved. File requests make up a good deal of what we're waiting for. It's not the bus or the drives that are usually the limitation. It's the shitty coding. I can live with a hit at startup. I can live with delays if I have to eat into swap. But I'm sick and tired of basic functionality being missing or broken.</p></div><p>That's because most Windows applications are written in C/C++, and it would be a royal pain to make them asynchronous.</p><p>People confuse "multi-threaded" and "asynchronous". They mean almost the same thing, but there's a substantial difference in development styles. Multi-threaded (or "multi-core") is usually when an algorithm is split to run parallel across 'n' threads, asynchronous is when a program does something in the background without blocking. The former is actually quite easy in C/C++, the latter is very hard, because tracking memory ownership across a bunch of threads is a huge pain. It would help a lot if the core Windows user-space apps were re-written in a managed language like C# so that they could use asynchronous code heavily without the developers twisting their brains into knots.</p><p>What doesn't help matters is that Microsoft's multi-threading APIs and libraries have been terrible since forever, and their new push towards multi-threaded programming has been to polish the turd a little. 
They just don't seem to have smart guys working for them any more who can design something as complex as a general purpose multi-threading library (akin to OSX's "Grand Central Dispatch"). I've seen Microsoft's weak attempt at it in .NET 4, and it's just... sad.</p>
	</htmltext>
<tokentext>...the implementation sucks.Why for example does Windows Explorer decide to freeze ALL network connections when a single URN is n't quickly resolved ?
Why is it that when my USB drive wakes up , all explorer windows freeze ?
If you are trying to tell me there 's no way using the current abstractions to implement this I say you 're mad .
For that matter when a copy or move fails in Explorer , why ca n't I simply resume it once I 've fixed whatever the problem is .
You 're left piecing together what has and has n't been moved .
File requests make up a good deal of what we 're waiting for .
It 's not the bus or the drives that are usually the limitation .
It 's the shitty coding .
I can live with a hit at startup .
I can live with delays if I have to eat into swap .
But I 'm sick and tired of basic functionality being missing or broken.That 's because most Windows applications are written in C/C + + , and it would be a royal pain to make them asynchronous.People confuse " multi-threaded " and " asynchronous " .
They mean almost the same thing , but there 's a substantial difference in development styles .
Multi-threaded ( or " multi-core " ) is usually when an algorithm is split to run parallel across 'n ' threads , asynchronous is when a program does something in the background without blocking .
The former is actually quite easy in C/C + + , the latter is very hard , because tracking memory ownership across a bunch of threads is a huge pain .
It would help a lot if the core Windows user-space apps were re-written in a managed language like C # so that they could use asynchronous code heavily without the developers twisting their brains into knots.What does n't help matters is that Microsoft 's multi-threading APIs and libraries have been terrible since forever , and their new push towards multi-threaded programming has been to polish the turd a little .
They just do n't seem to have smart guys working for them any more who can design something as complex as a general purpose multi-threading library ( akin to OSX 's " Grand Central Dispatch " ) .
I 've seen Microsoft 's weak attempt at it in .NET 4 , and it 's just... sad .</tokentext>
<sentencetext>...the implementation sucks. Why for example does Windows Explorer decide to freeze ALL network connections when a single URN isn't quickly resolved?
Why is it that when my USB drive wakes up, all explorer windows freeze?
If you are trying to tell me there's no way using the current abstractions to implement this I say you're mad.
For that matter when a copy or move fails in Explorer, why can't I simply resume it once I've fixed whatever the problem is.
You're left piecing together what has and hasn't been moved.
File requests make up a good deal of what we're waiting for.
It's not the bus or the drives that are usually the limitation.
It's the shitty coding.
I can live with a hit at startup.
I can live with delays if I have to eat into swap.
But I'm sick and tired of basic functionality being missing or broken. That's because most Windows applications are written in C/C++, and it would be a royal pain to make them asynchronous. People confuse "multi-threaded" and "asynchronous".
They mean almost the same thing, but there's a substantial difference in development styles.
Multi-threaded (or "multi-core") is usually when an algorithm is split to run parallel across 'n' threads, asynchronous is when a program does something in the background without blocking.
The former is actually quite easy in C/C++, the latter is very hard, because tracking memory ownership across a bunch of threads is a huge pain.
It would help a lot if the core Windows user-space apps were re-written in a managed language like C# so that they could use asynchronous code heavily without the developers twisting their brains into knots. What doesn't help matters is that Microsoft's multi-threading APIs and libraries have been terrible since forever, and their new push towards multi-threaded programming has been to polish the turd a little.
They just don't seem to have smart guys working for them any more who can design something as complex as a general purpose multi-threading library (akin to OSX's "Grand Central Dispatch").
I've seen Microsoft's weak attempt at it in .NET 4, and it's just... sad.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558</parent>
</comment>
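bertok's multi-threaded vs. asynchronous distinction can be made concrete. A sketch (Python; the summing workload and the sleep are stand-ins for real computation and I/O): the first function splits one algorithm across n threads, the second starts background work without blocking the caller.

```python
import asyncio
import threading

def parallel_sum(data, n=4):
    # "Multi-threaded": one algorithm split across n worker threads.
    results = [0] * n
    chunk = len(data) // n
    def worker(i):
        lo = i * chunk
        hi = len(data) if i == n - 1 else lo + chunk
        results[i] = sum(data[lo:hi])
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)

async def fetch_in_background():
    # "Asynchronous": kick off background work and keep going without blocking.
    async def slow_io():
        await asyncio.sleep(0.1)  # stands in for a network call
        return "payload"
    task = asyncio.create_task(slow_io())  # starts now, does not block
    ui_work = "UI stays responsive"        # the caller keeps doing other things
    return ui_work, await task

print(parallel_sum(list(range(1000))))     # 499500
print(asyncio.run(fetch_in_background()))
```

The same program can need both styles at once, which is why conflating them leads to the confusion the comment describes.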
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561506</id>
	<title>Fist post!</title>
	<author>Anonymous</author>
	<datestamp>1269175800000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext>Fist post!</htmltext>
<tokenext>Fist post !</tokentext>
<sentencetext>Fist post!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562366</id>
	<title>Re:I hate to say it, but...</title>
	<author>Aladrin</author>
	<datestamp>1269181800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Well, I don't hate to say it.</p><p>My work OSX computer is a lot more powerful than my home Windows computer and I spend a -lot- more time waiting on the OSX machine.  In fact, Word and Excel are the apps I wait the most on.</p><p>Rather than showing off a flaw in the OS, it probably points to things not being optimized.</p></htmltext>
<tokentext>Well , I do n't hate to say it.My work OSX computer is a lot more powerful than my home Windows computer and I spend a -lot- more time waiting on the OSX machine .
In fact , Word and Excel are the apps I wait the most on.Rather than showing off a flaw in the OS , it probably points to things not being optimized .</tokentext>
<sentencetext>Well, I don't hate to say it. My work OSX computer is a lot more powerful than my home Windows computer and I spend a -lot- more time waiting on the OSX machine.
In fact, Word and Excel are the apps I wait the most on. Rather than showing off a flaw in the OS, it probably points to things not being optimized.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561530</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562182</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>NatasRevol</author>
	<datestamp>1269180540000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>Transaction is copying some files, failing in the middle, and not rolling back those copied over??</p><p>Hint.  It's not a transaction.  It's just a bad piece of software that fails badly at doing its basic job.  Handling files.</p></htmltext>
<tokentext>Transaction is copying some files , failing in the middle , and not rolling back those copied over ? ? Hint .
It 's not transaction .
It 's just a bad piece of software that fails badly at doing it 's basic job .
Handling files .</tokentext>
<sentencetext>Transaction is copying some files, failing in the middle, and not rolling back those copied over?? Hint.
It's not a transaction.
It's just a bad piece of software that fails badly at doing its basic job.
Handling files.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561874</parent>
</comment>
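The resume-after-failure behavior argued over in this subthread needs no new OS abstraction; user space can do it. A sketch (Python; comparing file sizes is a deliberate simplification — a real tool would also check mtimes or checksums before skipping):

```python
import os
import shutil

def resumable_copy(src_dir, dst_dir):
    """Copy a tree, skipping files that already arrived in full,
    so a failed run can simply be re-run to resume."""
    copied, skipped = 0, 0
    for root, _dirs, files in os.walk(src_dir):
        rel = os.path.relpath(root, src_dir)
        out = os.path.join(dst_dir, rel)
        os.makedirs(out, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(out, name)
            # Heuristic: same size at the destination means "already copied".
            if os.path.exists(d) and os.path.getsize(d) == os.path.getsize(s):
                skipped += 1
                continue
            shutil.copy2(s, d)
            copied += 1
    return copied, skipped
```

Re-running after a mid-copy failure skips everything that made it across, which is the behavior the thread wishes Explorer had.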
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564882</id>
	<title>Re:Grand Central?</title>
	<author>jimicus</author>
	<datestamp>1269255300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Isn't this the reason for Apple to have rolled out GrandCentral in Snow Leopard?  If so, it seems it's not THAT hard to do - at least not that hard for a non-Windows OS.</p></div><p>Windows has been playing catchup with OS X for years, it's just there are so many fanbois on both sides that it can be very hard to get an objective viewpoint.</p><p><a href="http://www.youtube.com/watch?v=n74mktpenx8" title="youtube.com">http://www.youtube.com/watch?v=n74mktpenx8</a> [youtube.com]</p><p>Of course, you may decide that I am a fanboi.  In which case, perhaps you should buy a Mac and develop your own objective viewpoint?</p>
	</htmltext>
<tokentext>Is n't this the reason for Apple to have rolled out GrandCentral in Snow Leopard ?
If so , it seems it 's not THAT hard to do - at least not that hard for a non-Windows OS.Windows has been playing catchup with OS X for years , it 's just there are so many fanbois on both sides that it can be very hard to get an objective viewpoint.http : //www.youtube.com/watch ? v = n74mktpenx8 [ youtube.com ] Of course , you may decide that I am a fanboi .
In which case , perhaps you should buy a Mac and develop your own objective viewpoint ?</tokentext>
<sentencetext>Isn't this the reason for Apple to have rolled out GrandCentral in Snow Leopard?
If so, it seems it's not THAT hard to do - at least not that hard for a non-Windows OS. Windows has been playing catchup with OS X for years, it's just there are so many fanbois on both sides that it can be very hard to get an objective viewpoint. http://www.youtube.com/watch?v=n74mktpenx8 [youtube.com] Of course, you may decide that I am a fanboi.
In which case, perhaps you should buy a Mac and develop your own objective viewpoint?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561554</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562272</id>
	<title>Re:4096 processors not enough?</title>
	<author>mswhippingboy</author>
	<datestamp>1269181080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I believe the Jaguar - Cray XT5-HE has about 225,000 cores... IBM's BlueGene nearly 300,000 cores. Yea, give each app its own CPU - then what do we do with the other 299,990 cores?</htmltext>
<tokentext>I believe the Jaguar - Cray XT5-HE has about 225,000 cores... IBM 's BlueGene nearly 300,000 cores .
Yea , give each app it 's own CPU - then what do we do with the other 299,990 cores ?</tokentext>
<sentencetext>I believe the Jaguar - Cray XT5-HE has about 225,000 cores... IBM's BlueGene nearly 300,000 cores.
Yea, give each app its own CPU - then what do we do with the other 299,990 cores?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561798</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563250</id>
	<title>Re:I hate to say it, but...</title>
	<author>jim_v2000</author>
	<datestamp>1269189180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Pssst!  It's not supposed to be that slow.  Something is wrong with your particular machine.
<br/> <br/>
I almost never reboot my desktop and my laptop pretty much is either in hibernate or suspend when I'm not using it.  Both run Windows 7 and I don't have any performance issues with them.</htmltext>
<tokentext>Pssst !
It 's not supposed to be that slow .
Something is wrong with your particular machine .
I almost never reboot my desktop and my laptop pretty much is either in hibernate or suspend when I 'm not using it .
Both run Windows 7 and I do n't have any performance issues with them .</tokentext>
<sentencetext>Pssst!
It's not supposed to be that slow.
Something is wrong with your particular machine.
I almost never reboot my desktop and my laptop pretty much is either in hibernate or suspend when I'm not using it.
Both run Windows 7 and I don't have any performance issues with them.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561530</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562056</id>
	<title>Would Plan 9 suit the bill?</title>
	<author>MagikSlinger</author>
	<datestamp>1269179760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Plan 9 was designed around the idea of completely separate processes that could be running on separate CPUs.  Why not start there?</htmltext>
<tokentext>Plan 9 was designed around the idea of completely separate processes that could be running on separate CPUs .
Why not start there ?</tokentext>
<sentencetext>Plan 9 was designed around the idea of completely separate processes that could be running on separate CPUs.
Why not start there?</sentencetext>
</comment>
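The Plan 9 idea the comment gestures at — completely separate processes that the scheduler can place on separate CPUs, talking only through messages — can be sketched with ordinary OS processes (Python's multiprocessing here; the squaring worker is a placeholder):

```python
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    # A fully separate process: shares no memory with its parent and
    # communicates only by messages, so the OS is free to put it on
    # its own core (or, in the Plan 9 spirit, its own machine).
    for item in iter(inbox.get, None):
        outbox.put(item * item)

def run_pipeline(values):
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    for v in values:
        inbox.put(v)
    inbox.put(None)  # sentinel: tell the worker to stop
    results = sorted(outbox.get() for _ in values)
    p.join()
    return results

if __name__ == "__main__":
    print(run_pipeline(range(5)))  # [0, 1, 4, 9, 16]
```

Because nothing is shared, there are no locks to get wrong; the cost is that all coordination must go through explicit messages.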
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31568878</id>
	<title>Re:A more basic question</title>
	<author>radish</author>
	<datestamp>1269274020000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>iPhone isn't even slightly "instant on" - it takes at least a minute to boot an iPhone from off. What you're seeing most of the time is "screen off" mode. Unsurprisingly, switching the screen on &amp; cranking up the CPU clock doesn't take much time. Likewise, waking my Windows box up from sleep doesn't take very long either. Comparing modern OS software running on modern hardware I see little difference in boot times, or wake time from sleep - which would indicate that if MS are being lazy then so are Apple &amp; all the devs in the Linux &amp; BSD worlds. As for why my ST used to boot so much quicker, well the lack of discs helped, as did the lack of hardware variance (and thus lack of drivers to load &amp; start).</p></htmltext>
<tokentext>iPhone is n't even slightly " instant on " - it takes at least a minute to boot an iPhone from off .
What you 're seeing most of the time is " screen off " mode .
Unsurprisingly , switching the screen on &amp; cranking up the CPU clock does n't take much time .
Likewise , waking my Windows box up from sleep does n't take very long either .
Comparing modern OS software running on modern hardware I see little difference in boot times , or wake time from sleep - which would indicate that if MS are being lazy then so are Apple &amp; all the devs in the Linux &amp; BSD worlds .
As for why my ST used to boot so much quicker , well the lack of discs helped , as did the lack of hardware variance ( and thus lack of drivers to load &amp; start ) .</tokentext>
<sentencetext>iPhone isn't even slightly "instant on" - it takes at least a minute to boot an iPhone from off.
What you're seeing most of the time is "screen off" mode.
Unsurprisingly, switching the screen on &amp; cranking up the CPU clock doesn't take much time.
Likewise, waking my Windows box up from sleep doesn't take very long either.
Comparing modern OS software running on modern hardware I see little difference in boot times, or wake time from sleep - which would indicate that if MS are being lazy then so are Apple &amp; all the devs in the Linux &amp; BSD worlds.
As for why my ST used to boot so much quicker, well the lack of discs helped, as did the lack of hardware variance (and thus lack of drivers to load &amp; start).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561736</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31569032</id>
	<title>Re:It's not even about multiple cores</title>
	<author>psbrogna</author>
	<datestamp>1269274380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Careful what you wish for. I for one would like my OS to make data integrity a higher priority than the user interface. Sure, I like my mouse pointer to move smoothly across the screen and for a window to close when I click on the close window control, but not if it means my bits are spooling to /dev/null.</htmltext>
<tokentext>Careful what you wish for .
I for one would like my OS to make data integrity a higher priority then the user interface .
Sure , I like my mouse pointer to move smoothly across the screen and for a window to close when I click on the close window control , but not if it means my bits are spooling to /dev/null .</tokentext>
<sentencetext>Careful what you wish for.
I for one would like my OS to make data integrity a higher priority than the user interface.
Sure, I like my mouse pointer to move smoothly across the screen and for a window to close when I click on the close window control, but not if it means my bits are spooling to /dev/null.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564182</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561874</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>cdrnet</author>
	<datestamp>1269178440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>That's called transaction and is a good thing (except the part where it doesn't tell exactly why it failed). I'd hate if it would not behave like that.</p></htmltext>
<tokentext>That 's called transaction and is a good thing ( except the part where it does n't tell exactly why it failed ) .
I 'd hate if it would not behave like that .</tokentext>
<sentencetext>That's called transaction and is a good thing (except the part where it doesn't tell exactly why it failed).
I'd hate if it would not behave like that.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31568862</id>
	<title>Re:The problem: the event-driven model</title>
	<author>thesuperbigfrog</author>
	<datestamp>1269273960000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext><div class="quote"><p>Most languages still handle concurrency very badly.  C and C++ are clueless about concurrency.  Java and C# know a little about it. Erlang and Go take it more seriously, but are intended for server-side processing.  So GUI programmers don't get much help from the language.</p><p>In particular, in C and C++, there's locking, but there's no way within the language to <i>even talk about</i> which locks protect which data. Thus, concurrency can't be analyzed automatically. This has become a huge mess in C/C++, as more attributes ("mutable", "volatile", per-thread storage, etc.) have been bolted on to give some hints to the compiler. There's still race condition trouble between compilers and CPUs with long look-ahead and programs with heavy concurrency.</p><p>We need better hard-compiled languages that don't punt on concurrency issues.  C++ could potentially have been fixed, but the C++ committee is in denial about the problem; they're still in template la-la land, adding features few need and fewer will use correctly, rather than trying to do something about reliability issues.  C# is only slightly better; Microsoft Research did some work on <a href="http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=A494B5D3E175B81187E5BBF4BFFA2FB9?doi=10.1.1.3.8971&amp;rep=rep1&amp;type=pdf" title="psu.edu">"Polyphonic C#"</a> [psu.edu], but nobody seems to use that.  Yes, there are lots of obscure academic languages that address concurrency.  Few are used in the real world.</p></div><p>Ada 2005's task model is a real world, production quality approach to include concurrency in a hard-compiled language.  Ada isn't exactly known for its GUI libraries (there is GtkAda), but it could be used as a foundation for an improved concurrent GUI paradigm.</p><p><a href="http://books.google.com/books?id=iilIj3JXNrAC&amp;dq=ada+concurrency+support&amp;source=gbs_navlinks_s" title="google.com">This book</a> [google.com] covers the subject quite well.
</p>
	</htmltext>
<tokentext>Most languages still handle concurrency very badly .
C and C + + are clueless about concurrency .
Java and C # know a little about it .
Erlang and Go take it more seriously , but are intended for server-side processing .
So GUI programmers do n't get much help from the language.In particular , in C and C + + , there 's locking , but there 's no way within the language to even talk about which locks protect which data .
Thus , concurrency ca n't be analyzed automatically .
This has become a huge mess in C/C + + , as more attributes ( " mutable " , " volatile " , per-thread storage , etc .
) have been bolted on to give some hints to the compiler .
There 's still race condition trouble between compilers and CPUs with long look-ahead and programs with heavy concurrency.We need better hard-compiled languages that do n't punt on concurrency issues .
C + + could potentially have been fixed , but the C + + committee is in denial about the problem ; they 're still in template la-la land , adding features few need and fewer will use correctly , rather than trying to do something about reliability issues .
C # is only slightly better ; Microsoft Research did some work on " Polyphonic C # " [ psu.edu ] , but nobody seems to use that .
Yes , there are lots of obscure academic languages that address concurrency .
Few are used in the real world.Ada 2005 's task model is a real world , production quality approach to include concurrency in a hard-compiled language .
Ada is n't exactly known for its GUI libraries ( there is GtkAda ) , but it could be used as a foundation for an improved concurrent GUI paradigm.This book [ google.com ] covers the subject quite well .</tokentext>
<sentencetext>Most languages still handle concurrency very badly.
C and C++ are clueless about concurrency.
Java and C# know a little about it.
Erlang and Go take it more seriously, but are intended for server-side processing.
So GUI programmers don't get much help from the language. In particular, in C and C++, there's locking, but there's no way within the language to even talk about which locks protect which data.
Thus, concurrency can't be analyzed automatically.
This has become a huge mess in C/C++, as more attributes ("mutable", "volatile", per-thread storage, etc.) have been bolted on to give some hints to the compiler.
There's still race condition trouble between compilers and CPUs with long look-ahead and programs with heavy concurrency. We need better hard-compiled languages that don't punt on concurrency issues.
C++ could potentially have been fixed, but the C++ committee is in denial about the problem; they're still in template la-la land, adding features few need and fewer will use correctly, rather than trying to do something about reliability issues.
C# is only slightly better; Microsoft Research did some work on "Polyphonic C#" [psu.edu], but nobody seems to use that.
Yes, there are lots of obscure academic languages that address concurrency.
Few are used in the real world. Ada 2005's task model is a real world, production quality approach to include concurrency in a hard-compiled language.
Ada isn't exactly known for its GUI libraries (there is GtkAda), but it could be used as a foundation for an improved concurrent GUI paradigm. This book [google.com] covers the subject quite well.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561934</parent>
</comment>
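The quoted complaint — that C and C++ give you locks but no way to say which lock protects which data — is what languages like Rust address statically. A dynamic approximation by convention (Python; the Guarded wrapper is a hypothetical helper, not a standard library type):

```python
import threading

class Guarded:
    """Tie a lock to the data it protects: the only way to reach the
    value is through the context manager, which holds the lock.
    (Rust enforces this statically; Python can only encourage it.)"""
    def __init__(self, value):
        self._lock = threading.Lock()
        self._value = value

    def __enter__(self):
        self._lock.acquire()
        return self._value

    def __exit__(self, *exc):
        self._lock.release()
        return False

counter = Guarded([0])  # wrap the mutable state at creation time

def bump(n):
    for _ in range(n):
        with counter as c:  # lock held exactly while the data is touched
            c[0] += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
with counter as c:
    print(c[0])  # 40000
```

Nothing stops a careless caller from stashing the value outside the with-block, which is exactly the gap the quoted text says the language should close.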
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561534</id>
	<title>IT MIGHT BE WAITING</title>
	<author>Anonymous</author>
	<datestamp>1269176040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>FOR IO.</p><p>DON'T LAND THERE.</p></htmltext>
<tokentext>FOR IO.DO N'T LAND THERE .</tokentext>
<sentencetext>FOR IO. DON'T LAND THERE.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561728</id>
	<title>10^10 CPUs and I still have to wait ...</title>
	<author>mi</author>
	<datestamp>1269177540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>... for NFS to give up on a disconnected server... By the original design and the continuing default settings, the stuck processes are neither killable nor interruptible. You can reboot the whole system, but you can't kill one process.

</p><p>Hurray for the OS designers!</p></htmltext>
<tokentext>... for NFS to give up on a disconnected server... By the original design and the continuing default settings , the stuck processes are neither killable nor interruptible .
You can reboot the whole system , but you ca n't kill one process .
Hurray for the OS designers !</tokentext>
<sentencetext>... for NFS to give up on a disconnected server... By the original design and the continuing default settings, the stuck processes are neither killable nor interruptible.
You can reboot the whole system, but you can't kill one process.
Hurray for the OS designers!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563926</id>
	<title>Re:4096 processors not enough?</title>
	<author>noidentity</author>
	<datestamp>1269196020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>The largest single system image I'm aware of runs Linux on a 4096 processor SGI machine with 17TB RAM. Maybe He means that Windows needs rework?</p></div></blockquote><p>640 cores should be enough for anybody.</p>
	</htmltext>
<tokentext>The largest single system image I 'm aware of runs Linux on a 4096 processor SGI machine with 17TB RAM .
Maybe He means that Windows needs rework ? 640 cores should be enough for anybody .</tokentext>
<sentencetext>The largest single system image I'm aware of runs Linux on a 4096 processor SGI machine with 17TB RAM.
Maybe He means that Windows needs rework? 640 cores should be enough for anybody.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561798</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558</id>
	<title>Current architecture flawed but workable BUT....</title>
	<author>Anonymous</author>
	<datestamp>1269176160000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext><p>...the implementation sucks.</p><p>Why for example does Windows Explorer decide to freeze ALL network connections when a single URN isn't quickly resolved? Why is it that when my USB drive wakes up, all explorer windows freeze? If you are trying to tell me there's no way using the current abstractions to implement this I say you're mad. For that matter when a copy or move fails in Explorer, why can't I simply resume it once I've fixed whatever the problem is. You're left piecing together what has and hasn't been moved. File requests make up a good deal of what we're waiting for. It's not the bus or the drives that are usually the limitation. It's the shitty coding. I can live with a hit at startup. I can live with delays if I have to eat into swap. But I'm sick and tired of basic functionality being missing or broken.</p></htmltext>
<tokentext>...the implementation sucks.Why for example does Windows Explorer decide to freeze ALL network connections when a single URN is n't quickly resolved ?
Why is it that when my USB drive wakes up , all explorer windows freeze ?
If you are trying to tell me there 's no way using the current abstractions to implement this I say you 're mad .
For that matter when a copy or move fails in Explorer , why ca n't I simply resume it once I 've fixed whatever the problem is .
You 're left piecing together what has and has n't been moved .
File requests make up a good deal of what we 're waiting for .
It 's not the bus or the drives that are usually the limitation .
It 's the shitty coding .
I can live with a hit at startup .
I can live with delays if I have to eat into swap .
But I 'm sick and tired of basic functionality being missing or broken .</tokentext>
<sentencetext>...the implementation sucks. Why for example does Windows Explorer decide to freeze ALL network connections when a single URN isn't quickly resolved?
Why is it that when my USB drive wakes up, all explorer windows freeze?
If you are trying to tell me there's no way using the current abstractions to implement this I say you're mad.
For that matter when a copy or move fails in Explorer, why can't I simply resume it once I've fixed whatever the problem is.
You're left piecing together what has and hasn't been moved.
File requests make up a good deal of what we're waiting for.
It's not the bus or the drives that are usually the limitation.
It's the shitty coding.
I can live with a hit at startup.
I can live with delays if I have to eat into swap.
But I'm sick and tired of basic functionality being missing or broken.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564104</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>Anonymous</author>
	<datestamp>1269198720000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>Windows explorer sucks.  It always just abandons copies after a fail - even if you're moving thousands of files over a network.  Yes, you're left wondering which files did/didn't make it.  It's actually easier to sometimes copy all the files you want to shift locally, then move the copy, so that you can resume after a fail. It's laughable you have to do this, however.</p><p>But it's not a concurrency issue, and neither, really, are the first 2 problems you mention.  They're also down to Windows Explorer sucking.</p></div><p>They seem to have fixed those issues with Windows 7. But ya, it really pissed me off too. The other thing that has made me literally destroy hardware is the way XP would lock your whole goddamn system if a network resource stopped responding.... even when you weren't trying to access it.</p></p>
	</htmltext>
<tokentext>Windows explorer sucks .
It always just abandons copies after a fail - even if you 're moving thousands of files over a network .
Yes , you 're left wondering which files did/did n't make it .
It 's actually easier to sometimes copy all the files you want to shift locally , then move the copy , so that you can resume after a fail .
It 's laughable you have to do this , however .
But it 's not a concurrency issue , and neither , really , are the first 2 problems you mention .
They 're also down to Windows Explorer sucking .
They seem to have fixed those issues with Windows 7 .
But ya , it really pissed me off too .
The other thing that has made me literally destroy hardware is the way XP would lock your whole goddamn system if a network resource stopped responding.... even when you were n't trying to access it .</tokentext>
<sentencetext>Windows explorer sucks.
It always just abandons copies after a fail - even if you're moving thousands of files over a network.
Yes, you're left wondering which files did/didn't make it.
It's actually easier to sometimes copy all the files you want to shift locally, then move the copy, so that you can resume after a fail.
It's laughable you have to do this, however.
But it's not a concurrency issue, and neither, really, are the first 2 problems you mention.
They're also down to Windows Explorer sucking.
They seem to have fixed those issues with Windows 7.
But ya, it really pissed me off too.
The other thing that has made me literally destroy hardware is the way XP would lock your whole goddamn system if a network resource stopped responding.... even when you weren't trying to access it.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564070</id>
	<title>Re:Multithreading is the problem, not the answer</title>
	<author>drsmithy</author>
	<datestamp>1269198120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p> <i>The old paradigms from the 20th century do not work anymore because they were not designed for parallel processing.</i>
</p><p>The first multiprocessor machine appeared in 1961.  Do you really think computer science hasn't changed since then ?</p></htmltext>
<tokentext>The old paradigms from the 20th century do not work anymore because they were not designed for parallel processing .
The first multiprocessor machine appeared in 1961 .
Do you really think computer science has n't changed since then ?</tokentext>
<sentencetext> The old paradigms from the 20th century do not work anymore because they were not designed for parallel processing.
The first multiprocessor machine appeared in 1961.
Do you really think computer science hasn't changed since then?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561816</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31565334</id>
	<title>Why should you ever be waiting for your computer?</title>
	<author>Ihlosi</author>
	<datestamp>1269262140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Because CPU power isn't the bottleneck in most systems, duh. What's slowing today's computers down are things like mass storage access times and bandwidth, RAM size and bandwidth, etc.</p></htmltext>
<tokentext>Because CPU power is n't the bottleneck in most systems , duh .
What 's slowing today 's computers down are things like mass storage access times and bandwidth , RAM size and bandwidth , etc .</tokentext>
<sentencetext>Because CPU power isn't the bottleneck in most systems, duh.
What's slowing today's computers down are things like mass storage access times and bandwidth, RAM size and bandwidth, etc.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562324</id>
	<title>You don't need a microkernel for that</title>
	<author>mattdm</author>
	<datestamp>1269181440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You don't even need one which can run in different threads. You just need one which can flag various tasks as "uh, come back to that in a bit", and then very quickly go to the next thing on the list.</p></htmltext>
<tokentext>You do n't even need one which can run in different threads .
You just need one which can flag various tasks as " uh , come back to that in a bit " , and then very quickly go to the next thing on the list .</tokentext>
<sentencetext>You don't even need one which can run in different threads.
You just need one which can flag various tasks as "uh, come back to that in a bit", and then very quickly go to the next thing on the list.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561768</parent>
</comment>
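The scheme this comment describes, flagging a stalled task as "come back to that in a bit" and moving on, is essentially cooperative scheduling. A minimal sketch in Python (toy task names, not any real kernel API), with generators standing in for resumable tasks:

```python
from collections import deque

def task(name, steps):
    # A resumable unit of work: each yield is a point where the
    # scheduler may set it aside and come back to it later.
    for i in range(steps):
        yield f"{name} step {i}"

def run(tasks):
    # Round-robin over runnable tasks; a task that is not finished
    # simply goes to the back of the queue instead of blocking.
    queue = deque(tasks)
    log = []
    while queue:
        t = queue.popleft()
        try:
            log.append(next(t))
            queue.append(t)      # not finished: revisit later
        except StopIteration:
            pass                 # finished: drop it
    return log

log = run([task("io", 2), task("ui", 3)])
```

The interleaved log shows no task ever monopolizes the single "CPU", which is the point: one thread of control suffices if work is always set aside quickly.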
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561802</id>
	<title>Re:I hate to say it, but...</title>
	<author>GIL_Dude</author>
	<datestamp>1269177900000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>Are you running a 9 year old version of OSX too, or are you comparing a two generation old Windows version to a nice new Mac version? It really sounds like you are comparing apples (snicker) to oranges. After all, both Vista and Windows 7 have no problem running for a long, long time between reboots and don't get slow during that time.</htmltext>
<tokentext>Are you running a 9 year old version of OSX too , or are you comparing a two generation old Windows version to a nice new Mac version ?
It really sounds like you are comparing apples ( snicker ) to oranges .
After all , both Vista and Windows 7 have no problem running for a long , long time between reboots and do n't get slow during that time .</tokentext>
<sentencetext>Are you running a 9 year old version of OSX too, or are you comparing a two generation old Windows version to a nice new Mac version?
It really sounds like you are comparing apples (snicker) to oranges.
After all, both Vista and Windows 7 have no problem running for a long, long time between reboots and don't get slow during that time.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561530</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561822</id>
	<title>Re:Luckily OSX is Already Has MultiCore Tech</title>
	<author>Anonymous</author>
	<datestamp>1269178020000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>I'm not sure I get it - GCD just looks like a threadpool library. Windows has had a <a href="http://msdn.microsoft.com/en-us/library/ms686766(VS.85).aspx" title="microsoft.com" rel="nofollow">built-in threadpool API</a> [microsoft.com] that's been available since Windows 2000, and it seems to do pretty much the same thing as GCD.</p></htmltext>
<tokentext>I 'm not sure I get it - GCD just looks like a threadpool library .
Windows has had a built-in threadpool API [ microsoft.com ] that 's been available since Windows 2000 , and it seems to do pretty much the same thing as GCD .</tokentext>
<sentencetext>I'm not sure I get it - GCD just looks like a threadpool library.
Windows has had a built-in threadpool API [microsoft.com] that's been available since Windows 2000, and it seems to do pretty much the same thing as GCD.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561624</parent>
</comment>
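The task-submission shape this comment attributes to both GCD and the Windows thread-pool API can be illustrated in portable form; the sketch below uses Python's concurrent.futures as a rough stand-in, not either actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # Stand-in for one independent work item handed to the pool.
    return n * n

# The pool owns and schedules the worker threads; callers only
# submit tasks. That submission model is what GCD blocks and
# Windows thread-pool work items have in common.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(work, range(8)))
```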
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562702</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>Anonymous</author>
	<datestamp>1269184380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I found a solution to this long ago. After the copies fail, select all of the files and copy them again. When the "this file already exists" prompt is displayed, place your stapler, jam your pencil, or do something of the like to hold the "n" button down since there isn't any way to say "repeat". I recommend having notepad running in the background to catch all of the extra "n"'s.</p><p>Luckily, this isn't an issue in Win 7.</p></htmltext>
<tokentext>I found a solution to this long ago .
After the copies fail , select all of the files and copy them again .
When the " this file already exists " prompt is displayed , place your stapler , jam your pencil , or do something of the like to hold the " n " button down since there is n't any way to say " repeat " .
I recommend having notepad running in the background to catch all of the extra " n " 's .
Luckily , this is n't an issue in Win 7 .</tokentext>
<sentencetext>I found a solution to this long ago.
After the copies fail, select all of the files and copy them again.
When the "this file already exists" prompt is displayed, place your stapler, jam your pencil, or do something of the like to hold the "n" button down since there isn't any way to say "repeat".
I recommend having notepad running in the background to catch all of the extra "n"'s.
Luckily, this isn't an issue in Win 7.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562174</id>
	<title>Re:The problem isnt even that simple</title>
	<author>Anonymous</author>
	<datestamp>1269180480000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Windows parallelizes this.  It does very little processing in the IRQ handler; the bulk of the work is done in a Deferred Procedure Call (DPC), which can run on any thread (i.e. any CPU).</p><p>The bottleneck is the speed of the hardware, not the speed at which the OS services it.</p><p>Or are you using a twenty year old OS?</p></htmltext>
<tokentext>Windows parallelizes this .
It does very little processing in the IRQ handler ; the bulk of the work is done in a Deferred Procedure Call ( DPC ) , which can run on any thread ( i.e. any CPU ) .
The bottleneck is the speed of the hardware , not the speed at which the OS services it .
Or are you using a twenty year old OS ?</tokentext>
<sentencetext>Windows parallelizes this.
It does very little processing in the IRQ handler; the bulk of the work is done in a Deferred Procedure Call (DPC), which can run on any thread (i.e. any CPU).
The bottleneck is the speed of the hardware, not the speed at which the OS services it.
Or are you using a twenty year old OS?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561542</parent>
</comment>
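The split this comment describes, a minimal interrupt-time handler plus deferred work that can run on any free thread, can be sketched with a plain worker queue. This is a toy Python analogue with invented names, not the real NT DPC machinery, which lives in the kernel:

```python
import queue
import threading

dpc_queue = queue.Queue()
handled = []

def isr(device_id):
    # "Interrupt" handler: record the minimum and defer the rest,
    # the way an NT driver queues a DPC from its ISR.
    dpc_queue.put(device_id)

def dpc_worker():
    # Deferred work runs later, on whatever thread is available.
    while True:
        item = dpc_queue.get()
        if item is None:
            break                      # shutdown sentinel
        handled.append(item * 10)      # the "bulk of the work"

t = threading.Thread(target=dpc_worker)
t.start()
for dev in (1, 2, 3):
    isr(dev)
dpc_queue.put(None)
t.join()
```

The FIFO queue preserves arrival order, so deferred work completes in the order the "interrupts" fired even though it runs on a different thread.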
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562610</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>Tablizer</author>
	<datestamp>1269183660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I smell an <b>open-source</b> opportunity to replace <b>Windows Explorer</b>. If each component/function of Windows is slowly replaced by a better open-source version, then eventually people won't need Windows at all without the learning curve transition of Linux distros.</p></htmltext>
<tokentext>I smell an open-source opportunity to replace Windows Explorer .
If each component/function of Windows is slowly replaced by a better open-source version , then eventually people wo n't need Windows at all without the learning curve transition of Linux distros .</tokentext>
<sentencetext>I smell an open-source opportunity to replace Windows Explorer.
If each component/function of Windows is slowly replaced by a better open-source version, then eventually people won't need Windows at all without the learning curve transition of Linux distros.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561846</id>
	<title>Re:Luckily OSX is Already Has MultiCore Tech</title>
	<author>PhrostyMcByte</author>
	<datestamp>1269178260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>What he is trying to do is get enough cores on a CPU so that each thread or process can run on its own core.  He essentially wants to remove the scheduler from the OS, so that there would be no time slices -- stuff would just run straight with no context switching.  This is entirely different from GCD, which is an implementation of task-based parallelism backed by thread pools.</p><p>This would really only work on CPUs with a few thousand cores, and even then the CPUs would need to have some very intelligent power management for cores that aren't being used, or are in use but waiting on something like I/O.</p></htmltext>
<tokentext>What he is trying to do is get enough cores on a CPU so that each thread or process can run on its own core .
He essentially wants to remove the scheduler from the OS , so that there would be no time slices -- stuff would just run straight with no context switching .
This is entirely different from GCD , which is an implementation of task-based parallelism backed by thread pools .
This would really only work on CPUs with a few thousand cores , and even then the CPUs would need to have some very intelligent power management for cores that are n't being used , or are in use but waiting on something like I/O .</tokentext>
<sentencetext>What he is trying to do is get enough cores on a CPU so that each thread or process can run on it's own core.
He essentially wants to remove the scheduler from the OS, so that there would be no time slices -- stuff would just run straight with no context switching.
This is entirely different from GCD, which is an implementation of task-based parallelism backed by thread pools.
This would really only work on CPUs with a few thousand cores, and even then the CPUs would need to have some very intelligent power management for cores that aren't being used, or are in use but waiting on something like I/O.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561624</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561994</id>
	<title>Re:Microkernel?</title>
	<author>Amanieu</author>
	<datestamp>1269179280000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>Actually most current monolithic kernels are multithreaded, so they can have one thread working on reading that CD, while another thread handles user input, etc. The only difference from microkernels is that it's all in a single address space.</htmltext>
<tokentext>Actually most current monolithic kernels are multithreaded , so they can have one thread working on reading that CD , while another thread handles user input , etc .
The only difference from microkernels is that it 's all in a single address space .</tokentext>
<sentencetext>Actually most current monolithic kernels are multithreaded, so they can have one thread working on reading that CD, while another threads handles user input, etc.
The only difference from microkernels is that it's all in a single address space.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561768</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562544</id>
	<title>Re:The problem isnt even that simple</title>
	<author>sjames</author>
	<datestamp>1269183300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It's not accessing different devices that's the problem; generally, you give the device work, then do something else while it completes. The problem is that the individual device is serial. You have a half dozen disk I/Os to do, so you queue them up and they are executed one at a time.</p></htmltext>
<tokentext>It 's not accessing different devices that 's the problem ; generally , you give the device work , then do something else while it completes .
The problem is that the individual device is serial .
You have a half dozen disk I/Os to do , so you queue them up and they are executed one at a time .</tokentext>
<sentencetext>It's not accessing different devices that's the problem, generally, you give the device work, then do something else while it completes.
The problem is that the individual device is serial.
You have a half dozen disk I/Os to do, so you queue them up and they are executed one at a time.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561542</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31566068</id>
	<title>Re:Fist post!</title>
	<author>Anonymous</author>
	<datestamp>1269266700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>That's actually pretty good typing with your fists. Do you have a comically large keyboard?</p></div><p>Yes, yes I do!</p><p>~Sent from my iPad</p></p>
	</htmltext>
<tokentext>That 's actually pretty good typing with your fists .
Do you have a comically large keyboard ?
Yes , yes I do !
~ Sent from my iPad</tokentext>
<sentencetext>That's actually pretty good typing with your fists.
Do you have a comically large keyboard?
Yes, yes I do!
~Sent from my iPad
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562332</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563496</id>
	<title>Re:Grand Central?</title>
	<author>Anonymous</author>
	<datestamp>1269191700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Hey Apple fanboy: he's not talking about adding another threading library.  Yeah, Grand Central Dispatch is a great idea, but it does not solve the fundamental problem of the operating system not scaling well to multiple processors.  Probert and basically every other one of the multicore guys have been saying this for years: we need to re-think the model of what a computer is, from the hardware up.  x86 sucks -- ARM does a lot of good, but it's just not the change we need, at least in the opinion of this (ITOOT) AC.  While it may not be obvious now, in another few years when users demand that they want MORE things and that they want it RIGHT NOW, you will see.</p><p>Read up on Lewis and Berg or some of the things that Yale Patt discusses: we need to re-think the processor.  Then we need to re-think the languages we use to describe our problems to a computer (C has served us well, but it's time we moved on to functional programming languages).  After that, we need to re-think the operating system.  And through all of this, we need to support all of our legacy, x86-based programs.  Seem like a non-trivial problem now?</p></htmltext>
<tokentext>Hey Apple fanboy : he 's not talking about adding another threading library .
Yeah , Grand Central Dispatch is a great idea , but it does not solve the fundamental problem of the operating system not scaling well to multiple processors .
Probert and basically every other one of the multicore guys have been saying this for years : we need to re-think the model of what a computer is , from the hardware up .
x86 sucks -- ARM does a lot of good , but it 's just not the change we need , at least in the opinion of this ( ITOOT ) AC .
While it may not be obvious now , in another few years when users demand that they want MORE things and that they want it RIGHT NOW , you will see .
Read up on Lewis and Berg or some of the things that Yale Patt discusses : we need to re-think the processor .
Then we need to re-think the languages we use to describe our problems to a computer ( C has served us well , but it 's time we moved on to functional programming languages ) .
After that , we need to re-think the operating system .
And through all of this , we need to support all of our legacy , x86-based programs .
Seem like a non-trivial problem now ?</tokentext>
<sentencetext>Hey Apple fanboy: he's not talking about adding another threading library.
Yeah, Grand Central Dispatch is a great idea, but it does not solve the fundamental problem of the operating system not scaling well to multiple processors.
Probert and basically every other one of the multicore guys have been saying this for years: we need to re-think the model of what a computer is, from the hardware up.
x86 sucks -- ARM does a lot of good, but it's just not the change we need, at least in the opinion of this (ITOOT) AC.
While it may not be obvious now, in another few years when users demand that they want MORE things and that they want it RIGHT NOW, you will see.
Read up on Lewis and Berg or some of the things that Yale Patt discusses: we need to re-think the processor.
Then we need to re-think the languages we use to describe our problems to a computer (C has served us well, but it's time we moved on to functional programming languages).
After that, we need to re-think the operating system.
And through all of this, we need to support all of our legacy, x86-based programs.
Seem like a non-trivial problem now?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561554</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31575558</id>
	<title>Oh yeh, let's re-invent Mac OS 7.</title>
	<author>argent</author>
	<datestamp>1269253080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Remember how much Mac OS 7-9 sucked? One reason was that every application had to re-implement all kinds of resource management code that the OS normally took care of. Shared resources, and a CPU is a shared resource even if you have dozens of the things, just as files are a shared resource and managed by the OS even though you have hundreds of thousands of them, should be managed centrally.</p><p>This doesn't mean that the specific APIs devised to share single processors between hundreds of programs are necessarily ideal, but there's a hell of a lot of a difference between that and throwing up your hands and saying "let's just give each program its own VM".</p></htmltext>
<tokentext>Remember how much Mac OS 7-9 sucked ?
One reason was that every application had to re-implement all kinds of resource management code that the OS normally took care of .
Shared resources , and a CPU is a shared resource even if you have dozens of the things , just as files are a shared resource and managed by the OS even though you have hundreds of thousands of them , should be managed centrally .
This does n't mean that the specific APIs devised to share single processors between hundreds of programs are necessarily ideal , but there 's a hell of a lot of a difference between that and throwing up your hands and saying " let 's just give each program its own VM " .</tokentext>
<sentencetext>Remember how much Mac OS 7-9 sucked?
One reason was that every application had to re-implement all kinds of resource management code that the OS normally took care of.
Shared resources, and a CPU is a shared resource even if you have dozens of the things, just as files are a shared resource and managed by the OS even though you have hundreds of thousands of them, should be managed centrally.
This doesn't mean that the specific APIs devised to share single processors between hundreds of programs are necessarily ideal, but there's a hell of a lot of a difference between that and throwing up your hands and saying "let's just give each program its own VM".</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564296</id>
	<title>Re:Why?</title>
	<author>RightSaidFred99</author>
	<datestamp>1269288240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Thank God we have you to patronizingly explain to the multi-decade experienced kernel/OS developer all about multiprocessor scheduling and it being "NP-complete"!</htmltext>
<tokentext>Thank God we have you to patronizingly explain to the multi-decade experienced kernel/OS developer all about multiprocessor scheduling and it being " NP-complete " !</tokentext>
<sentencetext>Thank God we have you to patronizingly explain to the multi-decade experienced kernel/OS developer all about multiprocessor scheduling and it being "NP-complete"!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561556</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31571240</id>
	<title>Why not make the multiple CPUs look like only one?</title>
	<author>Anonymous</author>
	<datestamp>1269281280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Have one virtual CPU that is really 4 cores.  So it can issue 4 cores X 6 instructions/cycle = 24 full instructions dispatched in one cycle.  OK, maybe overkill for small programs.</p></htmltext>
<tokentext>Have one virtual CPU that is really 4 cores .
So it can issue 4 cores X 6 instructions/cycle = 24 full instructions dispatched in one cycle .
OK , maybe overkill for small programs .</tokentext>
<sentencetext>Have one virtual CPU that is really 4 cores.
So it can issue 4 cores X 6 instructions/cycle = 24 full instructions dispatched in one cycle.
OK, maybe overkill for small programs.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563688</id>
	<title>Re:freeze during DNS resolution</title>
	<author>Anonymous</author>
	<datestamp>1269193140000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I haven't seen the code, but my guess is that Explorer freezes because it was written back when <tt>gethostbyname()</tt> was the only option for DNS lookup. Unfortunately, that function was both synchronous and non-reentrant, so it might have helped if they'd updated the code to use <tt>WSAAsyncGetHostByName()</tt>, but that function actually isn't reentrant either!</p><p>MSDN says:</p><p><div class="quote"><p>The WSAAsyncGetHostByName function is not designed to provide parallel resolution of several names. Therefore, applications that issue several requests should not expect them to be executed concurrently. Alternatively, applications can start another thread and use the getaddrinfo function to resolve names in an IP-version agnostic manner. Developers creating Windows Sockets 2 applications are urged to use the getaddrinfo function to enable smooth transition to IPv6 compatibility.</p></div><p>So the current "best practice" is to use <tt>getaddrinfo()</tt> in another thread. I guess the Explorer team didn't get the memo.</p></p>
	</htmltext>
<tokentext>I have n't seen the code , but my guess is that Explorer freezes because it was written back when gethostbyname ( ) was the only option for DNS lookup .
Unfortunately , that function was both synchronous and non-reentrant , so it might have helped if they 'd updated the code to use WSAAsyncGetHostByName ( ) , but that function actually is n't reentrant either !
MSDN says :
The WSAAsyncGetHostByName function is not designed to provide parallel resolution of several names .
Therefore , applications that issue several requests should not expect them to be executed concurrently .
Alternatively , applications can start another thread and use the getaddrinfo function to resolve names in an IP-version agnostic manner .
Developers creating Windows Sockets 2 applications are urged to use the getaddrinfo function to enable smooth transition to IPv6 compatibility .
So the current " best practice " is to use getaddrinfo ( ) in another thread .
I guess the Explorer team did n't get the memo .</tokentext>
<sentencetext>I haven't seen the code, but my guess is that Explorer freezes because it was written back when gethostbyname() was the only option for DNS lookup.
Unfortunately, that function was both synchronous and non-reentrant, so it might have helped if they'd updated the code to use WSAAsyncGetHostByName(), but that function actually isn't reentrant either!
MSDN says:
The WSAAsyncGetHostByName function is not designed to provide parallel resolution of several names.
Therefore, applications that issue several requests should not expect them to be executed concurrently.
Alternatively, applications can start another thread and use the getaddrinfo function to resolve names in an IP-version agnostic manner.
Developers creating Windows Sockets 2 applications are urged to use the getaddrinfo function to enable smooth transition to IPv6 compatibility.
So the current "best practice" is to use getaddrinfo() in another thread.
I guess the Explorer team didn't get the memo.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558</parent>
</comment>
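The "run getaddrinfo in another thread" practice quoted from MSDN in the comment above is portable. A hedged sketch in Python, whose socket.getaddrinfo wraps the same resolver call; resolve_async is a hypothetical helper name, not a standard API:

```python
import socket
import threading

def resolve_async(host, port, callback):
    # Run the blocking resolver off the calling thread so a UI
    # loop never stalls on a slow DNS lookup.
    def worker():
        try:
            infos = socket.getaddrinfo(host, port)
            callback(host, infos, None)
        except OSError as exc:
            callback(host, None, exc)
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t

results = {}
t = resolve_async("localhost", 80,
                  lambda h, infos, err: results.update({h: (infos, err)}))
t.join()
```

A real GUI would marshal the callback back onto its event loop instead of touching shared state directly; the point here is only that the blocking call happens off the main thread.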
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31571038</id>
	<title>Re:Duh</title>
	<author>BadOctopus</author>
	<datestamp>1269280740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><div class="quote"><p>Why should you ever, with all this parallel hardware, ever be waiting for your computer?'</p></div><p>For a lot of problems, for the same reason that some guy who just married 8 brides will still have to wait for his baby.</p></div><p>Impotency?</p></p>
	</htmltext>
<tokentext>Why should you ever , with all this parallel hardware , ever be waiting for your computer ? '
For a lot of problems , for the same reason that some guy who just married 8 brides will still have to wait for his baby .
Impotency ?</tokentext>
<sentencetext>Why should you ever, with all this parallel hardware, ever be waiting for your computer?
For a lot of problems, for the same reason that some guy who just married 8 brides will still have to wait for his baby.
Impotency?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561910</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564310</id>
	<title>Re:Luckily OSX is Already Has MultiCore Tech</title>
	<author>RightSaidFred99</author>
	<datestamp>1269288480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>No, you pretty much nailed it.  That's (almost) all it is, just implemented as an environment service.  What makes it so revolutionary is the cool name and the fact that it runs on an Apple product.</htmltext>
<tokenext>No , you pretty much nailed it .
That 's ( almost ) all it is , just implemented as an environment service .
What makes it so revolutionary is the cool name and the fact that it runs on an Apple product .</tokentext>
<sentencetext>No, you pretty much nailed it.
That's (almost) all it is, just implemented as an environment service.
What makes it so revolutionary is the cool name and the fact that it runs on an Apple product.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561822</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31579612</id>
	<title>So let me see...</title>
	<author>Anonymous</author>
	<datestamp>1269280380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I run linux.  When I build a kernel, I can specify how many cores get used.  Symmetric multi-threading, also known by a certain company's trade name "hyperthreading", causes the number of execution threads to double (on my core i7-920 it shows up as eight).  When building a kernel, it shows 8 cores all running at 100%.  When compiling any other program, I can also specify how many cores, and it uses all of them fully.  When rendering a blender image, I can specify how many threads Blender uses (up to a maximum of 64), and Blender seems to use them all up to a maximum of 100% each (800% total).  Now I know that Linux is used a lot on computers much larger than mine.  I go to sites like www.top500.org and see computers with hundreds of thousands of multi-core processors, running at the same speed as mine or faster.  These machines are usually not given jobs that take more than two weeks to finish.  They are not multi-user machines, but single user, single job, 10,000 processors or 40,000 cores, taking two weeks to finish (larger jobs not accepted).  Now if these computers with 10000 times as much power as mine take two weeks to finish some jobs, why is it wrong for my computer, when computing pi to 750,000,000 digits, to take nearly half an hour?  It consumes nearly all of the 12GB of ram on my system to calculate it out, and with my terminal spewing as fast as it can, 45 minutes to print out the results (if I can display 13000 digits on the screen at once, that's a measly 21 screenfuls per second).  I suspect the problem might be that some jobs are just large.  I suspect buddy is talking about spreadsheets and word processors and clippy (he's a microsoft guy after all).  My commodore vic20 had really great performance.  If microsoft needs to look for performance, they should look to the vic20 as a way to make their systems better.  After all, windows is only at 7, and commodore was already at 20.  That's nearly 3 times as fast. 
You can all thank me now.</p></htmltext>
<tokenext>I run linux .
When I build a kernel , I can specify how many cores get used .
Symmetric multi-threading , also known by a certain companies trade name " hyperthreading " , causes the number of execution threads to double ( on my core i7-920 it shows up as eight ) .
When building a kernel , it shows 8 cores all running at 100 \ % .
When compiling any other program , I can also specify how many cores , and it uses all of them fully .
When rendering a blender image , I can specify how many threads Blender uses ( up to a maximum of 64 ) , and Blender seems to use them all up to a maximum of 100 \ % each ( 800 \ % total ) .
Now I know that Linux is used a lot on computers much larger than mine .
I go to sites like www.top500.org and see computers with hundreds of thousands of multi-core processors , running at the same speed as mine or faster .
These machines are usually not given jobs that take more that two weeks to finish .
They are not multi-user machines , but single user , single job , 10,000 processors or 40,000 cores , taking two weeks to finish ( larger jobs not accepted ) .
Now if these computers with 10000 times as much power as mine , take two weeks to finish some jobs , why is it wrong for my computer , when computing pi to 750,000,000 digits , to take nearly half an hour ?
It consumes nearly all of the 12GB of ram on my system to calculate it out , and with my terminal spewing as fast as it can , 45 minutes to print out the results ( if I can display 13000 digits on the screen at once , thats a measly 21 screen fulls per second ) .
I suspect the problem might be that some jobs are just large .
I suspect buddy is talking about spread sheets and word processors and clippy ( he 's a microsoft guy after all ) .
My commodore vic20 had really great performance .
If microsoft needs to look for performance , they should look to the vic20 as a way to make their systems better .
After all , windows is only at 7 , and commodore was already at 20 .
Thats nearly 3 times as fast .
You can all thank me now .</tokentext>
<sentencetext>I run linux.
When I build a kernel, I can specify how many cores get used.
Symmetric multi-threading, also known by a certain company's trade name "hyperthreading", causes the number of execution threads to double (on my core i7-920 it shows up as eight).
When building a kernel, it shows 8 cores all running at 100%.
When compiling any other program, I can also specify how many cores, and it uses all of them fully.
When rendering a blender image, I can specify how many threads Blender uses (up to a maximum of 64), and Blender seems to use them all up to a maximum of 100% each (800% total).
Now I know that Linux is used a lot on computers much larger than mine.
I go to sites like www.top500.org and see computers with hundreds of thousands of multi-core processors, running at the same speed as mine or faster.
These machines are usually not given jobs that take more than two weeks to finish.
They are not multi-user machines, but single user, single job, 10,000 processors or 40,000 cores, taking two weeks to finish (larger jobs not accepted).
Now if these computers with 10000 times as much power as mine take two weeks to finish some jobs, why is it wrong for my computer, when computing pi to 750,000,000 digits, to take nearly half an hour?
It consumes nearly all of the 12GB of ram on my system to calculate it out, and with my terminal spewing as fast as it can, 45 minutes to print out the results (if I can display 13000 digits on the screen at once, that's a measly 21 screenfuls per second).
I suspect the problem might be that some jobs are just large.
I suspect buddy is talking about spreadsheets and word processors and clippy (he's a microsoft guy after all).
My commodore vic20 had really great performance.
If microsoft needs to look for performance, they should look to the vic20 as a way to make their systems better.
After all, windows is only at 7, and commodore was already at 20.
That's nearly 3 times as fast.
You can all thank me now.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563340</id>
	<title>Re:The problem: the event-driven model</title>
	<author>lennier</author>
	<datestamp>1269190200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>A big problem is the event-driven model of most user interfaces. Almost anything that needs to be done is placed on a serial event queue, which is then processed one event at a time.</p>  </div><p>This is an intriguing comment, so... In your opinion, how could we improve that? Some kind of 'event queue pipelining' feature which quickly scans events, guesses which ones might impact later ones and which ones won't, and then demultiplexes parallelisable events appropriately?</p><p>Or would it be possible to block, say, graphical update events up into large chunks and feed them into a GPU vector processor?</p><p>One thing which bugs me about the event-driven model in modern GUI frameworks is that for user code, it's next to impossible to get access to the raw event stream. You can create classes and signals/slots and register callbacks, and the framework 'does it all' for you. But I don't think these frameworks do nearly enough (or rather, they do too much; they don't allow themselves to be replaced by user code).</p><p>It seems to me that to implement parallelism, we'd need to have software components which are able to do just that: parse the core event stream and reprocess or optimise them into parallel streams. We do this on a case-by-case basis in, say, databases (where the query processor will optimise the query) - is it possible to provide this as a fundamental language feature?</p><p>My impression is that we could achieve this with a very simple change: make the core event stream something like just a Lisp list. A sequence that can contain arbitrarily structured data. And then let all user code manipulate the event stream just like a stack language would manipulate the stack. Read huge blocks of it, put sub-streams into it, etc.</p><p>At the moment about all a GUI object can do is grab an event or pass it on to a parent or delegate. 
But if we could upgrade our code to let it send <i>events about events</i>...</p><p>Same thing with C/C++/Java having all these source-code-level modifiers and metadata markups which aren't readable or settable at runtime. I think this is a dead end; eventually all networked systems will end up as message-passing dynamic systems. All roads lead back to Lisp (or Smalltalk), but we're taking a lot of byways to get there.</p></div>
	</htmltext>
<tokenext>A big problem is the event-driven model of most user interfaces .
Almost anything that needs to be done is placed on a serial event queue , which is then processed one event at a time .
This is an intriguing comment , so... In your opinion , how could we improve that ?
Some kind of 'event queue pipelining ' feature which quickly scans events , guesses which ones might impact later ones which which ones wo n't , and then demultiplexes parallelisable events appropriately ? Or would it be possible to block , say , graphical update events up into large chunks and feed them into a GPU vector processor ? One thing which bugs me about the event-driven model in modern GUI frameworks is that for user code , it 's next to impossible to get access to the raw event stream .
You can create classes and signals/slots and register callbacks , and the framework 'does it all ' for you .
But I do n't think these frameworks do nearly enough ( or rather , they do too much ; they do n't allow themselves to be replaced by user code ) .It seems to me that to implement parallelism , we 'd need to have software components which are able to do just that : parse the core event stream and reprocess or optimise them into parallel streams .
We do this on a case-by case basis in , say , databases ( where the query processor will optimise the query ) - is it possible to provide this as a fundamental language feature ? My impression is that we could achieve this with a very simple change : make the core event stream something like just a Lisp list .
A sequence that can contain arbitrarily structured data .
And then let all user code manipulate the event stream just like a stack language would manipulate the stack .
Read huge blocks of it , put sub-streams into it , etc.At the moment about all a GUI object can do is grab an event or pass it on to a parent or delegate .
But if we could upgrade our code to let it send events about events...Same thing with C/C + + /Java having all these source-code-level modifiers and metadata markups which are n't readable or settable at runtime .
I think this is a dead end ; eventually all networked systems will end up as message-passing dynamic systems .
All roads lead back to Lisp ( or Smalltalk ) , but we 're taking a lot of byways to get there .</tokentext>
<sentencetext>A big problem is the event-driven model of most user interfaces.
Almost anything that needs to be done is placed on a serial event queue, which is then processed one event at a time.
This is an intriguing comment, so... In your opinion, how could we improve that?
Some kind of 'event queue pipelining' feature which quickly scans events, guesses which ones might impact later ones and which ones won't, and then demultiplexes parallelisable events appropriately?
Or would it be possible to block, say, graphical update events up into large chunks and feed them into a GPU vector processor?
One thing which bugs me about the event-driven model in modern GUI frameworks is that for user code, it's next to impossible to get access to the raw event stream.
You can create classes and signals/slots and register callbacks, and the framework 'does it all' for you.
But I don't think these frameworks do nearly enough (or rather, they do too much; they don't allow themselves to be replaced by user code).
It seems to me that to implement parallelism, we'd need to have software components which are able to do just that: parse the core event stream and reprocess or optimise them into parallel streams.
We do this on a case-by-case basis in, say, databases (where the query processor will optimise the query) - is it possible to provide this as a fundamental language feature?
My impression is that we could achieve this with a very simple change: make the core event stream something like just a Lisp list.
A sequence that can contain arbitrarily structured data.
And then let all user code manipulate the event stream just like a stack language would manipulate the stack.
Read huge blocks of it, put sub-streams into it, etc.
At the moment about all a GUI object can do is grab an event or pass it on to a parent or delegate.
But if we could upgrade our code to let it send events about events... Same thing with C/C++/Java having all these source-code-level modifiers and metadata markups which aren't readable or settable at runtime.
I think this is a dead end; eventually all networked systems will end up as message-passing dynamic systems.
All roads lead back to Lisp (or Smalltalk), but we're taking a lot of byways to get there.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561934</parent>
</comment>
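The idea above, an inspectable event stream that user code demultiplexes into parallelisable sub-streams, can be sketched as plain data. The targets, event kinds, and the independence rule (events for different widgets never conflict) are all invented for illustration:

```python
from collections import defaultdict

# A core event stream as plain data: (target_widget, event_kind) pairs.
stream = [
    ("textbox", "keypress"),
    ("canvas", "redraw"),
    ("textbox", "keypress"),
    ("statusbar", "update"),
    ("canvas", "redraw"),
]

def demultiplex(events):
    """Split the serial queue into per-target sub-streams.

    Under the (simplistic) assumption that events for different targets
    never conflict, each sub-stream could be handed to its own core,
    while ordering is still preserved within any one target.
    """
    lanes = defaultdict(list)
    for target, kind in events:
        lanes[target].append(kind)
    return dict(lanes)

lanes = demultiplex(stream)
```

A real scheduler would need the hard part this sketch assumes away: deciding which events actually are independent.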
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564334</id>
	<title>Re:Luckily OSX is Already Has MultiCore Tech</title>
	<author>Anonymous</author>
	<datestamp>1269288960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Severely?  It's a fancy thread pool.  You posted an article saying "it's a fancy thread pool, but with a nice API and usage model". Not revolutionary.  The Java concurrency library from Lea (if I remember) way back when brought the same thing to Java developers.  GCD is evolution, not revolution.</htmltext>
<tokenext>Severely ?
It 's a fancy thread pool .
You posted an article saying " it 's a fancy thread pool , but with a nice API and usage model " .
Not revolutionary .
The Java concurrency library fro Lea ( if I remember ) way back when brought the same thing to Java developers .
GCD is evolution , not revolution .</tokentext>
<sentencetext>Severely?
It's a fancy thread pool.
You posted an article saying "it's a fancy thread pool, but with a nice API and usage model".
Not revolutionary.
The Java concurrency library from Lea (if I remember) way back when brought the same thing to Java developers.
GCD is evolution, not revolution.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562194</parent>
</comment>
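The "fancy thread pool" being discussed can be sketched from scratch. GCD and java.util.concurrent are far richer (priorities, completion callbacks, queue targeting), but the work-queue core looks roughly like this Python stand-in, with all names invented:

```python
import threading
import queue

def make_pool(num_workers):
    """A bare-bones work queue: the engine inside any 'fancy thread pool'."""
    tasks = queue.Queue()

    def worker():
        while True:
            fn = tasks.get()
            if fn is None:        # sentinel: shut this worker down
                break
            fn()
            tasks.task_done()

    workers = [threading.Thread(target=worker) for _ in range(num_workers)]
    for w in workers:
        w.start()
    return tasks, workers

results = []
lock = threading.Lock()
tasks, workers = make_pool(4)
for i in range(10):
    def job(i=i):
        with lock:                # protect the shared results list
            results.append(i * i)
    tasks.put(job)
tasks.join()                      # wait for all queued work to finish
for _ in workers:
    tasks.put(None)               # one sentinel per worker
for w in workers:
    w.join()
```

Submitting closures to a shared queue drained by a fixed set of worker threads is exactly the "nice usage model" the comment credits GCD with packaging well.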
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562574</id>
	<title>Re:The problem: the event-driven model</title>
	<author>hitmark</author>
	<datestamp>1269183420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>using a mmorpg as an example vs a browser may be a bad one, as a browser has to download all its data from the web, while the mmorpg has all data stored locally, and only changes in the scene are transmitted.</p><p>Now comparing something like, say, gmail, once the interface is fully loaded, with a mmorpg may be more correct.</p></htmltext>
<tokenext>using a mmorpg as an example vs a browser may be a bad example , as a browser have to download all data from the web , while the mmorpg have all data stored locally , and only changes in the scene is transmitted.Now comparing something like say gmail , once the interface was fully loaded , with a mmorpg may be more correct .</tokentext>
<sentencetext>using a mmorpg as an example vs a browser may be a bad one, as a browser has to download all its data from the web, while the mmorpg has all data stored locally, and only changes in the scene are transmitted. Now comparing something like, say, gmail, once the interface is fully loaded, with a mmorpg may be more correct.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561934</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31578906</id>
	<title>Re:Duh</title>
	<author>Buelldozer</author>
	<datestamp>1269273000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The computer is at the mall?</p></htmltext>
<tokenext>The computer is at the mall ?</tokentext>
<sentencetext>The computer is at the mall?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561910</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563218</id>
	<title>Re:The problem: the event-driven model</title>
	<author>michaelmuffin</author>
	<datestamp>1269188880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>Most languages still handle concurrency very badly. C and C++ are clueless about concurrency.</p></div><p>c itself isn't aware of concurrency, but so what? that doesn't mean that a program written in c can't be aware of concurrency. for example, the bell labs <a href="http://plan9.bell-labs.com/magic/man2html/2/thread" title="bell-labs.com">thread library</a> [bell-labs.com] manages threads and procs and provides channels for locking and communication between them. the syntax can get clumsy if you aren't careful, but that's the only problem i can think of</p><p><div class="quote"><p>Go [... is] intended for server-side processing.</p></div><p>what makes you think that? it's a general purpose language<br>
<br>
here's a ton of stuff that might interest you: <a href="http://swtch.com/~rsc/thread/" title="swtch.com">http://swtch.com/~rsc/thread/</a> [swtch.com]</p></div>
	</htmltext>
<tokenext>Most languages still handle concurrency very badly .
C and C + + are clueless about concurrency.c itself is n't aware of concurrency , but so what ?
that does n't mean that a program written in c ca n't be aware of concurrency .
for example , the bell labs thread library [ bell-labs.com ] manages threads and procs and provides channels for locking and communication between them .
the syntax can get clumsy if you are n't careful , but that 's the only problem i can think ofGo [ ... is ] intended for server-side processing.what makes you think that ?
it 's a general purpose language here 's a ton of stuff that might interest you : http : //swtch.com/ ~ rsc/thread/ [ swtch.com ]</tokentext>
<sentencetext>Most languages still handle concurrency very badly.
C and C++ are clueless about concurrency. c itself isn't aware of concurrency, but so what?
that doesn't mean that a program written in c can't be aware of concurrency.
for example, the bell labs thread library [bell-labs.com] manages threads and procs and provides channels for locking and communication between them.
the syntax can get clumsy if you aren't careful, but that's the only problem i can think of. Go [... is] intended for server-side processing. what makes you think that?
it's a general purpose language

here's a ton of stuff that might interest you: http://swtch.com/~rsc/thread/ [swtch.com]
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561934</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562210</id>
	<title>OSX is no more responsive than Windows</title>
	<author>judeancodersfront</author>
	<datestamp>1269180600000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>The author is talking about a complete OS redesign, not a new threading system.</htmltext>
<tokenext>The author is talking about a complete OS redesign , not a new threading system .</tokentext>
<sentencetext>The author is talking about a complete OS redesign, not a new threading system.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561554</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563228</id>
	<title>Re:Grand Central?</title>
	<author>Anonymous</author>
	<datestamp>1269189000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>@jonwil - not sure what .NET has to do with this observation...?</p><p>@op - Grand Central &amp; OpenCL were added to specifically address multicore optimization of the OS and to use graphics adapters for general computing when possible (respectively).  If Windows developers/planners are just deciding to start thinking about this... they've got bigger problems than I thought...</p></htmltext>
<tokenext>@ jonwil - not sure what .NET has to do with this observation.... ?
@ op - Grand Central &amp; OpenCL were added to specifically address multicore optimization of the OS and to use graphics adapters for general computing when possible ( respectively ) .
If Windows developers/planners are just deciding to start thinking about this...They 've got bigger problems than I thought.. .</tokentext>
<sentencetext>@jonwil - not sure what .NET has to do with this observation...?
@op - Grand Central &amp; OpenCL were added to specifically address multicore optimization of the OS and to use graphics adapters for general computing when possible (respectively).
If Windows developers/planners are just deciding to start thinking about this... they've got bigger problems than I thought...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561554</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561952</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>shoehornjob</author>
	<datestamp>1269178920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>AAH you must be using Vista (the dysfunctional redneck of operating systems). Some of this was actually fixed (gasp) in Windows 7. I'm still amazed that stuff actually works. For example, if you use the search panel at the bottom of the start menu it actually finds the program you want without going through a bunch of useless menus. This is especially helpful to me as I support a bunch of end users that don't know their computer. If you get a 404 error, sometimes the "diagnose" button actually will restore your connection. I've even seen it recover from a 169. ip address (kiss of death for tech support). Anyway, get Win 7 and you'll be happy.</htmltext>
<tokenext>AAH you must be using Vista ( the disfunctioal redneck of operating systems ) .
Some of this was actually fixed ( gasp ) in Windows 7 .
I 'm still amazed that stuff actually works .
For example , If you use the search panel at the bottom of the start menu it actually finds the program you want without going through a bunch of useless menu 's .
This is especially helpful to me as I support a bunch of end users that do n't know their computer .
If you get a 404 error sometimes the " diagnose " button actually will restore your connection .
I 've even seen it recover from a 169. ip address ( kiss of death for tech support ) .
Anyway , get Win 7 and you 'll be happy .</tokentext>
<sentencetext>AAH you must be using Vista (the dysfunctional redneck of operating systems).
Some of this was actually fixed (gasp) in Windows 7.
I'm still amazed that stuff actually works.
For example, if you use the search panel at the bottom of the start menu it actually finds the program you want without going through a bunch of useless menus.
This is especially helpful to me as I support a bunch of end users that don't know their computer.
If you get a 404 error sometimes the "diagnose" button actually will restore your connection.
I've even seen it recover from a 169. ip address (kiss of death for tech support).
Anyway, get Win 7 and you'll be happy.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31566998</id>
	<title>BeOS FTW - Thread, thread, thread</title>
	<author>Gothmolly</author>
	<datestamp>1269269220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Pervasive multithreading gives you the best user experience.  Hell, even the Linux kernel is self-preemptible to provide a faster-looking user experience.</p></htmltext>
<tokenext>Pervasive multithreading gives you the best user experience .
Hell , even the Linux kernel is self-preemptible to provide a faster-looking user experience .</tokentext>
<sentencetext>Pervasive multithreading gives you the best user experience.
Hell, even the Linux kernel is self-preemptible to provide a faster-looking user experience.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562292</id>
	<title>My experience with multicore is linux...</title>
	<author>Teunis</author>
	<datestamp>1269181260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Linux: 2 cores = 2 times faster, usually.   And so forth...    it's had scalable multicore/multiprocessor support for years.<br>Windows: extra core means 10% of programs are faster.   So I very much confirm windows needs a rewrite for this.<br><br>Now some specific apps under linux still need work - but only if they are resource hogs.   The only programs I've run into problems with of late are firefox (I'm now mostly running chrome - which is far smoother), thunderbird and - possibly - the X server itself.  (I've looked into X server code though - it looks like it's now set to scale)<br><br>I can't say anything about BSD or MacOSX as I've never run either on a multicore system.   For MacOSX though I'd wager that if it isn't already up to par - it will be.   The OSX and even NextStep GUIs are designed to scale.</htmltext>
<tokenext>Linux : 2 cores = 2 times faster , usually .
And so forth... it 's had scalable multicore/multiprocessor support for years.Windows : extra core means 10 \ % of programs are faster .
So I very much confirm windows needs a rewrite for this.Now some specific apps under linux still need work - but only if they are resource hogs .
The only programs I 've run into problems with this with of late are firefox ( I 'm now mostly running chrome - which is far smoother ) , thunderbird and - possibly - the X server itself .
( I 've looked into X server code though - it looks like it 's now set to scale ) I ca n't say anything about BSD or MacOSX as I 've never run either on a multicore system .
For MacOSX though I 'd wager that if it is n't already up to par - it will be .
The OSX and even NextStep GUIs are designed to scale .</tokentext>
<sentencetext>Linux: 2 cores = 2 times faster, usually.
And so forth... it's had scalable multicore/multiprocessor support for years. Windows: extra core means 10% of programs are faster.
So I very much confirm windows needs a rewrite for this. Now some specific apps under linux still need work - but only if they are resource hogs.
The only programs I've run into problems with of late are firefox (I'm now mostly running chrome - which is far smoother), thunderbird and - possibly - the X server itself.
(I've looked into X server code though - it looks like it's now set to scale) I can't say anything about BSD or MacOSX as I've never run either on a multicore system.
For MacOSX though I'd wager that if it isn't already up to par - it will be.
The OSX and even NextStep GUIs are designed to scale.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562740</id>
	<title>Rewiring the World</title>
	<author>DannyO152</author>
	<datestamp>1269184740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Went to high school with the guy. If he remembers me, it's only as that dude every one in Chess Club could beat with time enough in a lunch period to start another match.</p><p>It seems to me that the history of software architecting has been to write things that were kinda close to correct and then punt on the final polishing until the hardware caught up. One reason why software and computers are so widely loved and respected. From the original article, I suppose the argument is being made that all the folks adding cores should take a break. However, looking at the speculation in the article, there seems also to be the suggestion that by adding more and more cores, each process may be given its own cpu. While this would allow for more simultaneous independent processes, each process still takes as long to complete. There are two things which could cause the user to press the "This Sucks*" button, their music stutters, i.e., a particular process doesn't have the resources it needs, or the user has to go to the meeting but the computer hasn't finished rendering the hand-out, i.e., a particular process requires a certain number of clock ticks.</p><p>At its heart, a processor does two things, count and compare. Feeding the jobs to the processors so there's few moments of inactivity is, I believe, an NP problem, which means the better algorithms get the right answer a lot of the time. One could do better if one could predict the length and number of jobs that will enqueue, but, being able to predict in that way seems to contradict the results from The Halting Problem. So, my educated guess would be that 100\% efficiency is an unobtainable limit. Now imagine that we are talking about a desktop and the computer is waiting for me to type, read, and type again. Could the system have devoted more resources to housekeeping during the millions and maybe billions of cycles between the events I initiate? 
Sure, but who's to say that the next event occurs in 0.75 seconds or 20 minutes (because I went and had a snack or got a phone call). Context switching will always have a cost and my tasks will be the ones I notice as slow or unresponsive. Operating systems are written to give me the illusion that it gets right on the things I ask.</p><p>The article suggests there's a "Field of Dreams" effect regarding hardware and that this is a problem. I remember being taught that anything one may do in hardware one may do in software and vice versa. So, it makes sense to me that hardware folks should build the best processors they can imagine and let the software people figure out utilization tactics and strategies.</p><p>That, though, leads me to the other big hairball. Legacy software. We have concurrency/parallel sensitive languages. We have semantics that have been added to existing languages so that blocks of code may be noted by the runtime and distributed to the good-enough and sometimes best choice among available processors. But, where's the business case for hiring programmers to go back, rewrite and re-debug old software to identify the blocks?  This point applies to the new wished-for architecture as well, if it existed. What are the odds that C will be the best higher level language for the bare-metal systems programming in this new hardware architecture? (What about real-world patent/marketing issues with a new architecture and, by this, I ask unless it was invented by Intel, wouldn't Intel do everything in their power to kill it?) Would it be fair to say that one problem with a pc on every desktop is that when the pc needs to be changed, it takes forever?</p><p>* A performance monitoring and adaptation mechanism suggested in the article. Which reminds me, how much of our desktops' poor performance may be laid to network latency, congestion, and the economics of bandwidth capacity?</p></htmltext>
<tokenext>Went to high school with the guy .
If he remembers me , it 's only as that dude every one in Chess Club could beat with time enough in a lunch period to start another match.It seems to me that the history of software architecting has been to write things that were kinda close to correct and then punt on the final polishing until the hardware caught up .
One reason why software and computers are so widely loved and respected .
From the original article , I suppose the argument is being made that all the folks adding cores should take a break .
However , looking at the speculation in the article , there seems also to be the suggestion that by adding more and more cores , each process may be given its own cpu .
While this would allow for more simultaneous independent processes , each process still takes as long to complete .
There are two things which could cause the user to press the " This Sucks * " button , their music stutters , i.e. , a particular process does n't have the resources it needs , or the user has to go to the meeting but the computer has n't finished rendering the hand-out , i.e. , a particular process requires a certain number of clock ticks.At its heart , a processor does two things , count and compare .
Feeding the jobs to the processors so there 's few moments of inactivity is , I believe , an NP-hard problem , which means the better algorithms get the right answer a lot of the time .
One could do better if one could predict the length and number of jobs that will enqueue , but , being able to predict in that way seems to contradict the results from The Halting Problem .
So , my educated guess would be that 100 % efficiency is an unobtainable limit .
Now imagine that we are talking about a desktop and the computer is waiting for me to type , read , and type again .
Could the system have devoted more resources to housekeeping during the millions and maybe billions of cycles between the events I initiate ?
Sure , but who 's to say that the next event occurs in 0.75 seconds or 20 minutes ( because I went and had a snack or got a phone call ) .
Context switching will always have a cost and my tasks will be the ones I notice as slow or unresponsive .
Operating systems are written to give me the illusion that they get right on the things I ask.The article suggests there 's a " Field of Dreams " effect regarding hardware and that this is a problem .
I remember being taught that anything one may do in hardware one may do in software and vice versa .
So , it makes sense to me that hardware folks should build the best processors they can imagine and let the software people figure out utilization tactics and strategies.That , though , leads me to the other big hairball .
Legacy software .
We have concurrency/parallel sensitive languages .
We have semantics that have been added to existing languages so that blocks of code may be noted by the runtime and distributed to the good-enough and sometimes best choice among available processors .
But , where 's the business case for hiring programmers to go back , rewrite and re-debug old software to identify the blocks ?
This point applies to the new wished-for architecture as well , if it existed .
What are the odds that C will be the best higher level language for the bare-metal systems programming in this new hardware architecture ?
( What about real-world patent/marketing issues with a new architecture and , by this , I ask unless it was invented by Intel , would n't Intel do everything in their power to kill it ?
) Would it be fair to say that one problem with a pc on every desktop is that when the pc needs to be changed , it takes forever ?
* A performance monitoring and adaptation mechanism suggested in the article .
Which reminds me , how much of our desktops ' poor performance may be laid to network latency , congestion , and the economics of bandwidth capacity ?</tokenext>
<sentencetext>Went to high school with the guy.
If he remembers me, it's only as that dude every one in Chess Club could beat with time enough in a lunch period to start another match.It seems to me that the history of software architecting has been to write things that were kinda close to correct and then punt on the final polishing until the hardware caught up.
One reason why software and computers are so widely loved and respected.
From the original article, I suppose the argument is being made that all the folks adding cores should take a break.
However, looking at the speculation in the article, there seems also to be the suggestion that by adding more and more cores, each process may be given its own cpu.
While this would allow for more simultaneous independent processes, each process still takes as long to complete.
There are two things which could cause the user to press the "This Sucks*" button, their music stutters, i.e., a particular process doesn't have the resources it needs, or the user has to go to the meeting but the computer hasn't finished rendering the hand-out, i.e., a particular process requires a certain number of clock ticks.At its heart, a processor does two things, count and compare.
Feeding the jobs to the processors so there's few moments of inactivity is, I believe, an NP-hard problem, which means the better algorithms get the right answer a lot of the time.
One could do better if one could predict the length and number of jobs that will enqueue, but, being able to predict in that way seems to contradict the results from The Halting Problem.
So, my educated guess would be that 100% efficiency is an unobtainable limit.
Now imagine that we are talking about a desktop and the computer is waiting for me to type, read, and type again.
Could the system have devoted more resources to housekeeping during the millions and maybe billions of cycles between the events I initiate?
Sure, but who's to say that the next event occurs in 0.75 seconds or 20 minutes (because I went and had a snack or got a phone call).
Context switching will always have a cost and my tasks will be the ones I notice as slow or unresponsive.
Operating systems are written to give me the illusion that they get right on the things I ask.The article suggests there's a "Field of Dreams" effect regarding hardware and that this is a problem.
I remember being taught that anything one may do in hardware one may do in software and vice versa.
So, it makes sense to me that hardware folks should build the best processors they can imagine and let the software people figure out utilization tactics and strategies.That, though, leads me to the other big hairball.
Legacy software.
We have concurrency/parallel sensitive languages.
We have semantics that have been added to existing languages so that blocks of code may be noted by the runtime and distributed to the good-enough and sometimes best choice among available processors.
But, where's the business case for hiring programmers to go back, rewrite and re-debug old software to identify the blocks?
This point applies to the new wished-for architecture as well, if it existed.
What are the odds that C will be the best higher level language for the bare-metal systems programming in this new hardware architecture?
(What about real-world patent/marketing issues with a new architecture and, by this, I ask unless it was invented by Intel, wouldn't Intel do everything in their power to kill it?
) Would it be fair to say that one problem with a pc on every desktop is that when the pc needs to be changed, it takes forever?
* A performance monitoring and adaptation mechanism suggested in the article.
Which reminds me, how much of our desktops' poor performance may be laid to network latency, congestion, and the economics of bandwidth capacity?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31569202</id>
	<title>Re:Dumb programmers</title>
	<author>Reservoir Penguin</author>
	<datestamp>1269274920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>And this is why free software developers with no managers looming over them create such great, responsive, multi-core aware desktop applications, right?  What a joke, and a poor excuse for your laziness.</htmltext>
<tokenext>And this is why free software developers with no managers looming over them create such great , responsive , multi-core aware desktop applications , right ?
What a joke , and a poor excuse for your laziness .</tokenext>
<sentencetext>And this is why free software developers with no managers looming over them create such great, responsive, multi-core aware desktop applications, right?
What a joke, and a poor excuse for your laziness.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561676</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561884</id>
	<title>BeOS was doing it...</title>
	<author>Anonymous</author>
	<datestamp>1269178560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>BeOS was working on this well before multicore CPUs were the norm on the desktop and the level of responsiveness they managed on hardware that is stone-age tech by today's standard was extremely impressive.  Haiku will be picking up where BeOS left off, but it's got a lot of catching up to do on the details, big and small, to become an everyday user's system.</p><p>Yet another innovative player that Microsoft extinguished, and the whole tech world is worse off because of it.<nobr> <wbr></nobr>:(</p></htmltext>
<tokenext>BeOS was working on this well before multicore CPUs were the norm on the desktop and the level of responsiveness they managed on hardware that is stone-age tech by today 's standard was extremely impressive .
Haiku will be picking up where BeOS left off , but it 's got a lot of catching up to do on the details , big and small , to become an everyday user 's system.Yet another innovative player that Microsoft extinguished , and the whole tech world is worse off because of it .
: (</tokenext>
<sentencetext>BeOS was working on this well before multicore CPUs were the norm on the desktop and the level of responsiveness they managed on hardware that is stone-age tech by today's standard was extremely impressive.
Haiku will be picking up where BeOS left off, but it's got a lot of catching up to do on the details, big and small, to become an everyday user's system.Yet another innovative player that Microsoft extinguished, and the whole tech world is worse off because of it.
:(</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31574636</id>
	<title>Why you have to wait</title>
	<author>jgrahn</author>
	<datestamp>1269249360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>'Why should you ever, with all this parallel hardware, ever be waiting for your computer?' [Probert] asks.</p></div></blockquote><p>
I haven't read TFA, but quoted like that it looks pretty daft.
</p><ul>
<li>Because disk and network I/O is slower than your CPU.</li>
<li>Because if you *never* have to wait, you have an overpowered computer.</li>
</ul>
	</htmltext>
<tokenext>'Why should you ever , with all this parallel hardware , ever be waiting for your computer ?
' [ Probert ] asks .
I have n't read TFA , but quoted like that it looks pretty daft .
Because disk and network I/O is slower than your CPU .
Because if you * never * have to wait , you have an overpowered computer .</tokenext>
<sentencetext>'Why should you ever, with all this parallel hardware, ever be waiting for your computer?
' [Probert] asks.
I haven't read TFA, but quoted like that it looks pretty daft.
Because disk and network I/O is slower than your CPU.
Because if you *never* have to wait, you have an overpowered computer.

	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561798</id>
	<title>4096 processors not enough?</title>
	<author>macemoneta</author>
	<datestamp>1269177900000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>The largest single system image I'm aware of runs Linux on a <a href="https://docs.google.com/viewer?url=http://www.sgi.com/pdfs/4007.pdf" title="google.com" rel="nofollow">4096 processor SGI machine with 17TB RAM</a> [google.com].  Maybe he means that Windows needs rework?</p></htmltext>
<tokenext>The largest single system image I 'm aware of runs Linux on a 4096 processor SGI machine with 17TB RAM [ google.com ] .
Maybe he means that Windows needs rework ?</tokenext>
<sentencetext>The largest single system image I'm aware of runs Linux on a 4096 processor SGI machine with 17TB RAM [google.com].
Maybe he means that Windows needs rework?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31574156</id>
	<title>I really don't wait on my processor</title>
	<author>Anonymous</author>
	<datestamp>1269290940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><blockquote><div><p> <tt>'Why should you ever, with all this parallel hardware, ever be waiting for your computer?' he asked.</tt></p></div> </blockquote><p>I'm on a core2 duo, and the only time I ever wait on my processor is when I'm playing a game or when I'm using keepass, because my db is reencrypted I think 1000 times.</p><p>I do a lot of waiting though, but it's waiting on my hard drive. Don't think multicore is going to do much about that.</p>
	</htmltext>
<tokenext>'Why should you ever , with all this parallel hardware , ever be waiting for your computer ?
' he asked .
I 'm on a core2 duo , and the only time I ever wait on my processor is when I 'm playing a game or when I 'm using keepass , because my db is reencrypted I think 1000 times.I do a lot of waiting though , but it 's waiting on my hard drive .
Do n't think multicore is going to do much about that .</tokenext>
<sentencetext> 'Why should you ever, with all this parallel hardware, ever be waiting for your computer?
' he asked.
I'm on a core2 duo, and the only time I ever wait on my processor is when I'm playing a game or when I'm using keepass, because my db is reencrypted I think 1000 times.I do a lot of waiting though, but it's waiting on my hard drive.
Don't think multicore is going to do much about that.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562764</id>
	<title>Put up or shut up.</title>
	<author>Anonymous</author>
	<datestamp>1269184920000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>I'm getting really sick of posting this, but I'll continue to do so until you do. </p><p><b>BUILD A WORKING PROTOTYPE OF THIS "UNIVERSAL BEHAVING MACHINE", OR SHUT THE HELL UP</b>. </p><p>Those of us who aren't insane aren't impressed by talk, we're impressed by results. If you spend half as much effort building the thing as you do flapping your damn jaw, you'd be done by now. </p><p><tt>(For any uninitiated mods, this fellow is slashdot poster "rebelscience", and maintains a website of the same name. Every time a multiprocessing-related thread comes up, he posts this tripe but has never actually done anything about it. Visit his website, and you'll see why I call him a lunatic)</tt></p></htmltext>
<tokenext>I 'm getting really sick of posting this , but I 'll continue to do so until you do .
BUILD A WORKING PROTOTYPE OF THIS " UNIVERSAL BEHAVING MACHINE " , OR SHUT THE HELL UP .
Those of us who are n't insane are n't impressed by talk , we 're impressed by results .
If you spend half as much effort building the thing as you do flapping your damn jaw , you 'd be done by now .
( For any uninitiated mods , this fellow is slashdot poster " rebelscience " , and maintains a website of the same name .
Every time a multiprocessing-related thread comes up , he posts this tripe but has never actually done anything about it .
Visit his website , and you 'll see why I call him a lunatic )</tokenext>
<sentencetext>I'm getting really sick of posting this, but I'll continue to do so until you do.
BUILD A WORKING PROTOTYPE OF THIS "UNIVERSAL BEHAVING MACHINE", OR SHUT THE HELL UP.
Those of us who aren't insane aren't impressed by talk, we're impressed by results.
If you spend half as much effort building the thing as you do flapping your damn jaw, you'd be done by now.
(For any uninitiated mods, this fellow is slashdot poster "rebelscience", and maintains a website of the same name.
Every time a multiprocessing-related thread comes up, he posts this tripe but has never actually done anything about it.
Visit his website, and you'll see why I call him a lunatic)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561816</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564548</id>
	<title>Re:Dumb programmers</title>
	<author>Anonymous</author>
	<datestamp>1269249420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Other than the waiting, how is it living in 1997?</p></htmltext>
<tokenext>Other than the waiting , how is it living in 1997 ?</tokentext>
<sentencetext>Other than the waiting, how is it living in 1997?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561578</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562626</id>
	<title>Re:Multithreading is the problem, not the answer</title>
	<author>DaGoodBoy</author>
	<datestamp>1269183840000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext><p>Agreed. There are certain classes of problems for which threads provide an elegant solution, but it is not the answer for every problem. The same with Object Oriented techniques; they really help in some cases. Unfortunately, there is a tendency in our industry to treat whatever this year's popular tool or developmental concept is as some kind of panacea and everything that has gone before as some kind of remedial solution for technological dinosaurs.</p><p>The truth is less cut and dried. UNIX philosophy still applies (small, discrete applications; clean interfaces; in separate process spaces), despite the inherited, object oriented model being in vogue. Threads are good for the kinds of parallel problems they solve, but you can't beat a straightforward event loop for asynchronous performance and lack of obscure timing issues. Sometimes you just need an old fashioned FIFO for IPC, rather than some kind of sophisticated OS managed queuing system.</p><p>I'm old, but I've seen a lot. Most problems I've found in software development design / architecture come from someone with a degree using the latest college-taught solutions to solve real-world problems and inadvertently making them almost impossible to solve.</p></htmltext>
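The event-loop-plus-FIFO pattern this comment favors can be sketched in a few lines of stdlib Python (a hypothetical illustration; the event names and handler are invented):

```python
# Hypothetical sketch: a plain event loop draining an old-fashioned FIFO,
# one event at a time -- no threads, so none of the obscure timing issues
# the comment mentions.
from collections import deque

events = deque()   # the FIFO: producers append one end, the loop pops the other
handled = []

def handle(event):
    handled.append(event.upper())   # stand-in for real event handling

for e in ["open", "read", "close"]:  # a producer enqueues work in order
    events.append(e)

while events:                        # the event loop
    handle(events.popleft())         # strictly FIFO: no reordering, no races
```

Because a single loop processes one event at a time, handler code never needs locks; that is the asynchronous-without-threads trade-off being argued for.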
<tokenext>Agreed .
There are certain classes of problems for which threads provide an elegant solution , but it is not the answer for every problem .
The same with Object Oriented techniques ; they really help in some cases .
Unfortunately , there is a tendency in our industry to treat whatever this year 's popular tool or developmental concept is as some kind of panacea and everything that has gone before as some kind of remedial solution for technological dinosaurs.The truth is less cut and dried .
UNIX philosophy still applies ( small , discrete applications ; clean interfaces ; in separate process spaces ) , despite the inherited , object oriented model being in vogue .
Threads are good for the kinds of parallel problems they solve , but you ca n't beat a straight-forward event loop for asynchronous performance and lack of obscure timing issues .
Sometimes you just need an old fashioned FIFO for IPC , rather than some kind of sophisticated OS managed queuing system.I 'm old , but I 've seen a lot .
Most problems I 've found in software development design / architecture come from someone with a degree using the latest college-taught solutions to solve real-world problems and inadvertently making them almost impossible to solve .</tokenext>
<sentencetext>Agreed.
There are certain classes of problems for which threads provide an elegant solution, but it is not the answer for every problem.
The same with Object Oriented techniques; they really help in some cases.
Unfortunately, there is a tendency in our industry to treat whatever this year's popular tool or developmental concept is as some kind of panacea and everything that has gone before as some kind of remedial solution for technological dinosaurs.The truth is less cut and dried.
UNIX philosophy still applies (small, discrete applications; clean interfaces; in separate process spaces), despite the inherited, object oriented model being in vogue.
Threads are good for the kinds of parallel problems they solve, but you can't beat a straight-forward event loop for asynchronous performance and lack of obscure timing issues.
Sometimes you just need an old fashioned FIFO for IPC, rather than some kind of sophisticated OS managed queuing system.I'm old, but I've seen a lot.
Most problems I've found in software development design / architecture come from someone with a degree using the latest college-taught solutions to solve real-world problems and inadvertently making them almost impossible to solve.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561816</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563158</id>
	<title>Re:Multithreading is the problem, not the answer</title>
	<author>ceoyoyo</author>
	<datestamp>1269188340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Meh.  The thread vs. process crusaders get all uptight.  In reality both have their advantages and can be used almost interchangeably.  Other arguments against threads usually boil down to not actually letting the programmer play with the threads directly, but abstracting them away in some form (which is not a bad idea, but is also not as revolutionary as some would like to think).</p><p>If you've got multiple processors you need some way of parceling out work to them, so any parallel processing machine is going to have something like threads or processes at its basic level, just like any serial machine has instructions.  Should crappy programmers be allowed to play with threads directly?  Probably not, just like they shouldn't be allowed to program in assembly or use pointers.</p></htmltext>
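The "almost interchangeably" point about threads and processes can be sketched with Python's standard library, where the two share one executor interface (a hypothetical illustration; `unit_of_work` is invented):

```python
# Hypothetical sketch of "parceling out work": hand units of work to a pool
# of threads. For a picklable function, swapping in ProcessPoolExecutor
# parcels the same units out to separate processes with identical call sites.
from concurrent.futures import ThreadPoolExecutor

def unit_of_work(x):
    return x * x

with ThreadPoolExecutor(max_workers=2) as pool:
    results = sorted(pool.map(unit_of_work, range(5)))
# With ProcessPoolExecutor only the worker isolation changes
# (shared memory vs. separate address spaces); the code does not.
```

That shared interface is exactly the abstraction layer the comment says most anti-thread arguments quietly assume.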
<tokenext>Meh .
The thread vs. process crusaders get all uptight .
In reality both have their advantages and can be used almost interchangeably .
Other arguments against threads usually boil down to not actually letting the programmer play with the threads directly , but abstracting them away in some form ( which is not a bad idea , but is also not as revolutionary as some would like to think ) .If you 've got multiple processors you need some way of parceling out work to them so any parallel processing machine is going to have something like threads or processes at its basic level , just like any serial machine has instructions .
Should crappy programmers be allowed to play with threads directly ?
Probably not , just like they should n't be allowed to program in assembly or use pointers .</tokenext>
<sentencetext>Meh.
The thread vs. process crusaders get all uptight.
In reality both have their advantages and can be used almost interchangeably.
Other arguments against threads usually boil down to not actually letting the programmer play with the threads directly, but abstracting them away in some form (which is not a bad idea, but is also not as revolutionary as some would like to think).If you've got multiple processors you need some way of parceling out work to them so any parallel processing machine is going to have something like threads or processes at its basic level, just like any serial machine has instructions.
Should crappy programmers be allowed to play with threads directly?
Probably not, just like they shouldn't be allowed to program in assembly or use pointers.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561816</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564264</id>
	<title>Re:Luckily OSX is Already Has MultiCore Tech</title>
	<author>IamTheRealMike</author>
	<datestamp>1269201360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>What Apple calls "blocks" are what other languages have called "closures" and had for decades. Adding closures to Objective-C isn't an interesting advance, and if Siracusa believes that's what makes GCD revolutionary I can only imagine he needs to spend less time writing articles and more time writing or debugging multi-threaded code.</htmltext>
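The blocks-are-just-closures point can be sketched in any language that has had closures for decades; here in stdlib Python, with a queue standing in for a GCD-style dispatch queue (a hypothetical illustration; `make_counter` is invented):

```python
# Hypothetical sketch: a closure captures surrounding state and can be
# queued and run later -- the same shape as an Objective-C "block" handed
# to a GCD queue.
from queue import Queue

def make_counter():
    count = 0
    def block():          # the "block": captures `count` from its environment
        nonlocal count
        count += 1
        return count
    return block

dispatch_queue = Queue()  # stand-in for a serial dispatch queue
counter = make_counter()
for _ in range(3):
    dispatch_queue.put(counter)

runs = [dispatch_queue.get()() for _ in range(3)]  # -> [1, 2, 3]
```

The captured `count` survives between dispatches, which is the whole trick: the novelty in GCD is the queue and scheduler, not the closure.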
<tokenext>What Apple calls " blocks " are what other languages have called " closures " and had for decades .
Adding closures to Objective-C is n't an interesting advance , and if Siracusa believes that 's what makes GCD revolutionary I can only imagine he needs to spend less time writing articles and more time writing or debugging multi-threaded code .</tokenext>
<sentencetext>What Apple calls "blocks" are what other languages have called "closures" and had for decades.
Adding closures to Objective-C isn't an interesting advance, and if Siracusa believes that's what makes GCD revolutionary I can only imagine he needs to spend less time writing articles and more time writing or debugging multi-threaded code.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562194</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562166</id>
	<title>Re:Grand Central?</title>
	<author>shutdown -p now</author>
	<datestamp>1269180420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Microsoft has its own offering similar to Apple's GCD - <a href="http://www.nuonsoft.com/blog/2009/06/12/parallel-pattern-library-in-visual-c-2010/" title="nuonsoft.com">Parallel Patterns Library</a> [nuonsoft.com] for C++. It's mostly the same primitives (tasks &amp; groups of tasks) at a lower level, though it also offers a few simple STL-like algorithms with automatic parallelization.</p><p>For<nobr> <wbr></nobr>.NET, the same task primitives are offered, and then there's <a href="http://msdn.microsoft.com/en-us/magazine/cc163329.aspx" title="microsoft.com">Parallel LINQ</a> [microsoft.com] on top of that, which is effectively automatic parallelization of queries over sequences, with all the typical operations - map, reduce, filter, group, join, order - supported.</p></htmltext>
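What PLINQ-style "automatic parallelization of queries over sequences" promises can be sketched with stdlib Python, using a thread pool as a portable stand-in (a hypothetical illustration, not the .NET API; `square` and the data are invented):

```python
# Hypothetical sketch: a map/filter query over a sequence must give the same
# answer whether the map step runs serially or is spread across a worker pool.
from concurrent.futures import ThreadPoolExecutor

data = range(20)

def square(x):
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = sorted(v for v in pool.map(square, data) if v % 2 == 0)

sequential = sorted(v for v in map(square, data) if v % 2 == 0)
assert parallel == sequential  # parallelization must not change the result
```

The invariant in the final assertion is the contract such libraries sell: declare the query, let the runtime decide how many workers evaluate it.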
<tokenext>Microsoft has its own offering similar to Apple 's GCD - Parallel Patterns Library [ nuonsoft.com ] for C + + .
It 's mostly the same primitives ( tasks &amp; groups of tasks ) at a lower level , though it also offers a few simple STL-like algorithms with automatic parallelization.For .NET , the same task primitives are offered , and then there 's Parallel LINQ [ microsoft.com ] on top of that , which is effectively automatic parallelization of queries over sequences , with all the typical operations - map , reduce , filter , group , join , order - supported .</tokenext>
<sentencetext>Microsoft has its own offering similar to Apple's GCD - Parallel Patterns Library [nuonsoft.com] for C++.
It's mostly the same primitives (tasks &amp; groups of tasks) at a lower level, though it also offers a few simple STL-like algorithms with automatic parallelization.For .NET, the same task primitives are offered, and then there's Parallel LINQ [microsoft.com] on top of that, which is effectively automatic parallelization of queries over sequences, with all the typical operations - map, reduce, filter, group, join, order - supported.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561554</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563144</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>Anonymous</author>
	<datestamp>1269188220000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>Fixed in Vista and 7, you can ignore errors and continue copying.</p></htmltext>
<tokenext>Fixed in Vista and 7 , you can ignore errors and continue copying .</tokenext>
<sentencetext>Fixed in Vista and 7, you can ignore errors and continue copying.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562536</id>
	<title>Reminds me of the Cache Kernel.</title>
	<author>Grenamier</author>
	<datestamp>1269183180000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>The part of the article where Probert discusses the operating system becoming something like a hypervisor reminds me of the Cache Kernel from a Stanford University paper back in 1994. <a href="http://www-dsg.stanford.edu/papers/cachekernel/main.html" title="stanford.edu">http://www-dsg.stanford.edu/papers/cachekernel/main.html</a> [stanford.edu]</p><p>The way I understand it, the cache kernel in kernel mode doesn't really have built-in policy for traditional OS tasks like scheduling or resource management. It just serves as a cache for loading and unloading for things like address spaces and threads and making them active. The policy for working with these things comes from separate application kernels in user mode and kernel objects that are loaded by the cache kernel.</p><p>There's also a 1997 MIT paper on exokernels (http://pdos.csail.mit.edu/papers/exo-sosp97/exo-sosp97.html). The idea is separating the responsibility of management from the responsibility of protection. The exokernel knows how to protect resources and the application knows how to make them sing. In the paper, they build a webserver on this architecture and it performs very well.</p><p>Both of these papers have research operating systems that demonstrate specialized "native" applications running alongside unmodified UNIX applications running on UNIX emulators. That would suggest rebuilding an operating system in one of these styles wouldn't entail throwing out all the existing software or immediately forcing a new programming model on developers who aren't ready.</p><p>Microsoft used to talk about "personalities" in NT. It had subsystems for OS/2 1.x, Win16, and Win32 that would allow apps from OS/2 (character mode), Windows 3.1 and Windows NT running as peers on top of the NT kernel. Perhaps someday the subsystems come back, some as OS personalities running traditional apps, and some as whole applications with resource management policy in their own right. 
Notepad might just run on the Win32 subsystem, but Photoshop might be interested in managing its own memory as well as disk space.</p><p>The mid-90s were fun for OS research, weren't they?<nobr> <wbr></nobr>:)</p></htmltext>
<tokenext>The part of the article where Probert discusses the operating system becoming something like a hypervisor reminds me of the Cache Kernel from a Stanford University paper back in 1994. http : //www-dsg.stanford.edu/papers/cachekernel/main.html [ stanford.edu ] The way I understand it , the cache kernel in kernel mode does n't really have built-in policy for traditional OS tasks like scheduling or resource management .
It just serves as a cache for loading and unloading for things like addresses spaces and threads and making them active .
The policy for working with these things comes from separate application kernels in user mode and kernel objects that are loaded by the cache kernel.There 's also a 1997 MIT paper on exokernels ( http : //pdos.csail.mit.edu/papers/exo-sosp97/exo-sosp97.html ) .
The idea is separating the responsibility of management from the responsibility of protection .
The exokernel knows how to protect resources and the application knows how to make them sing .
In the paper , they build a web server on this architecture and it performs very well . Both of these papers have research operating systems that demonstrate specialized " native " applications running alongside unmodified UNIX applications on UNIX emulators .
That would suggest rebuilding an operating system in one of these styles would n't entail throwing out all the existing software or immediately forcing a new programming model on developers who are n't ready . Microsoft used to talk about " personalities " in NT .
It had subsystems for OS/2 1.x , Win16 , and Win32 that let apps from OS/2 ( character mode ) , Windows 3.1 and Windows NT run as peers on top of the NT kernel .
Perhaps someday the subsystems come back , some as OS personalities running traditional apps , and some as whole applications with resource management policy in their own right .
Notepad might just run on the Win32 subsystem , but Photoshop might be interested in managing its own memory as well as disk space . The mid-90s were fun for OS research , were n't they ?
: )</tokentext>
<sentencetext>The part of the article where Probert discusses the operating system becoming something like a hypervisor reminds me of the Cache Kernel from a Stanford University paper back in 1994. http://www-dsg.stanford.edu/papers/cachekernel/main.html [stanford.edu] The way I understand it, the cache kernel in kernel mode doesn't really have built-in policy for traditional OS tasks like scheduling or resource management.
It just serves as a cache for loading and unloading things like address spaces and threads and making them active.
The policy for working with these things comes from separate application kernels in user mode and kernel objects that are loaded by the cache kernel. There's also a 1997 MIT paper on exokernels (http://pdos.csail.mit.edu/papers/exo-sosp97/exo-sosp97.html).
The idea is separating the responsibility of management from the responsibility of protection.
The exokernel knows how to protect resources and the application knows how to make them sing.
In the paper, they build a web server on this architecture and it performs very well. Both of these papers have research operating systems that demonstrate specialized "native" applications running alongside unmodified UNIX applications on UNIX emulators.
That would suggest rebuilding an operating system in one of these styles wouldn't entail throwing out all the existing software or immediately forcing a new programming model on developers who aren't ready. Microsoft used to talk about "personalities" in NT.
It had subsystems for OS/2 1.x, Win16, and Win32 that let apps from OS/2 (character mode), Windows 3.1 and Windows NT run as peers on top of the NT kernel.
Perhaps someday the subsystems come back, some as OS personalities running traditional apps, and some as whole applications with resource management policy in their own right.
Notepad might just run on the Win32 subsystem, but Photoshop might be interested in managing its own memory as well as disk space. The mid-90s were fun for OS research, weren't they?
:)</sentencetext>
</comment>
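The comment above describes the exokernel split: the kernel only protects resources, while each application supplies its own management policy. A minimal sketch of that division, in Python with invented names (a toy "exokernel" tracking disk-block ownership; the real systems in the Stanford and MIT papers are far richer):

```python
# Toy illustration of separating PROTECTION (kernel) from MANAGEMENT
# (application), per the exokernel idea. All class and method names here
# are invented for this sketch, not taken from the papers.

class ExoKernel:
    """Knows only how to protect resources: who owns which disk block."""
    def __init__(self, n_blocks):
        self.owner = [None] * n_blocks  # block index -> owning app, or None

    def bind_block(self, app, block):
        # Protection check: a block may only be bound while it is free.
        if self.owner[block] is not None:
            raise PermissionError(f"block {block} already owned")
        self.owner[block] = app

    def write_block(self, app, block, data):
        # Protection check: only the owner may write.
        if self.owner[block] != app:
            raise PermissionError(f"{app!r} does not own block {block}")
        return len(data)  # pretend the write happened


class AppFileSystem:
    """Application-level management policy: it decides which blocks to
    grab and in what order; the kernel never second-guesses it."""
    def __init__(self, name, kernel):
        self.name, self.kernel, self.blocks = name, kernel, []

    def allocate(self, block):
        self.kernel.bind_block(self.name, block)
        self.blocks.append(block)


kernel = ExoKernel(n_blocks=8)
web = AppFileSystem("webserver", kernel)
web.allocate(0)
web.allocate(1)

print(kernel.write_block("webserver", 0, b"GET /"))  # owner: allowed
try:
    kernel.write_block("editor", 1, b"oops")         # non-owner: refused
except PermissionError as e:
    print("denied:", e)
```

The point of the split is that a webserver (as in the MIT paper) can lay out its own on-disk structures for speed, while the kernel's only job is making sure no one else's blocks get clobbered.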
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561530</id>
	<title>I hate to say it, but...</title>
	<author>Anonymous</author>
	<datestamp>1269175980000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext>...I do a lot more waiting on my XP machine than on my Mac.  Almost identical hardware, but when I'm opening an XLS file, Outlook and Word grind to a halt on the PC.  Sometimes, closing a window locks up the whole system for 30 seconds.  Shutting down takes an eternity, but the only thing worse than that is how slow the system gets after I leave it running for more than 4 days straight.
<br> <br>
My Mac, on the other hand, can stay running for months at a time, and maybe once a month I have to force quit an application.  But even then, it's to access that application, not anything else.</htmltext>
<tokenext>...I do a lot more waiting on my XP machine than on my Mac .
Almost identical hardware , but when I 'm opening an XLS file , Outlook and Word grind to a halt on the PC .
Sometimes , closing a window locks up the whole system for 30 seconds .
Shutting down takes an eternity , but the only thing worse than that is how slow the system gets after I leave it running for more than 4 days straight .
My Mac , on the other hand , can stay running for months at a time , and maybe once a month I have to force quit an application .
But even then , it 's to access that application , not anything else .</tokentext>
<sentencetext>...I do a lot more waiting on my XP machine than on my Mac.
Almost identical hardware, but when I'm opening an XLS file, Outlook and Word grind to a halt on the PC.
Sometimes, closing a window locks up the whole system for 30 seconds.
Shutting down takes an eternity, but the only thing worse than that is how slow the system gets after I leave it running for more than 4 days straight.
My Mac, on the other hand, can stay running for months at a time, and maybe once a month I have to force quit an application.
But even then, it's to access that application, not anything else.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564374</id>
	<title>Re:4096 processors not enough?</title>
	<author>Anonymous</author>
	<datestamp>1269289500000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"- Written for distributed memory architectures (MPI codes), allowing users access to up to 4,000 processors</p><p>- Written for shared-memory architectures (OpenMP<br>codes), with up to 256 processors accessible today in shared-memory mode,"</p><p>That's from the very paper you linked. If I am understanding this correctly, the system only scales up to 256 processors. The 4,000 number is really a distributed cluster. Windows already scales to 256 cores on a single box. Just thought you might want to know.</p></htmltext>
<tokenext>" - Written for distributed memory architectures ( MPI codes ) , allowing users access to up to 4,000 processors - Written for shared-memory architectures ( OpenMP codes ) , with up to 256 processors accessible today in shared-memory mode , " That 's from the very paper you linked .
If I am understanding this correctly , the system only scales up to 256 processors .
The 4000 number is really a distributed cluster .
Windows already scales to 256 cores on a single box .
Just thought you might want to know .</tokentext>
<sentencetext>"- Written for distributed memory architectures (MPI codes), allowing users access to up to 4,000 processors - Written for shared-memory architectures (OpenMP codes), with up to 256 processors accessible today in shared-memory mode," That's from the very paper you linked.
If I am understanding this correctly, the system only scales up to 256 processors.
The 4000 number is really a distributed cluster.
Windows already scales to 256 cores on a single box.
Just thought you might want to know.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561798</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31565480</id>
	<title>BeOS</title>
	<author>Anonymous</author>
	<datestamp>1269264060000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>BeOS!</p><p>But you killed it, so apologise for that first.</p></htmltext>
<tokenext>BeOS ! But you killed it , so apologise for that first .</tokentext>
<sentencetext>BeOS! But you killed it, so apologise for that first.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563830</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>Anonymous</author>
	<datestamp>1269194640000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Windows Explorer no longer kills network transfers after a failure as of Windows Vista.</p><p>Maybe some of the people complaining about Windows should stop using a version that's 9 years old (XP). Red Hat 7.2 isn't particularly great by today's standards either.</p></htmltext>
<tokenext>Windows Explorer no longer kills network transfers after a failure as of Windows Vista . Maybe some of the people complaining about Windows should stop using a version that 's 9 years old ( XP ) .
Red Hat 7.2 is n't particularly great by today 's standards either .</tokentext>
<sentencetext>Windows Explorer no longer kills network transfers after a failure as of Windows Vista. Maybe some of the people complaining about Windows should stop using a version that's 9 years old (XP).
Red Hat 7.2 isn't particularly great by today's standards either.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561554</id>
	<title>Grand Central?</title>
	<author>volfreak</author>
	<datestamp>1269176100000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>Isn't this the reason for Apple to have rolled out Grand Central Dispatch in Snow Leopard?  If so, it seems it's not THAT hard to do - at least not that hard for a non-Windows OS.</htmltext>
<tokenext>Is n't this the reason for Apple to have rolled out GrandCentral in Snow Leopard ?
If so , it seems it 's not THAT hard to do - at least not that hard for a non-Windows OS .</tokentext>
<sentencetext>Isn't this the reason for Apple to have rolled out GrandCentral in Snow Leopard?
If so, it seems it's not THAT hard to do - at least not that hard for a non-Windows OS.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562784</id>
	<title>Re:The way computers operate is to blame</title>
	<author>ZosX</author>
	<datestamp>1269185040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I don't think you can get away from this mode of operation. The way a computer's internals are stacked is basically vertical, with the processor on top. Between the processor you have the bus, and then the RAM and the I/O that feed the BIOS and the rest of the board. With RAM and FSB speeds exceeding 1GHz, you still have a sizable barrier when talking to a 2-3GHz processor. Clearly part of the solution is to dump all kinds of cache next to the CPU (L1), but that costs a great deal and adds to the silicon used. My processor has something like 512KB per core. I think the newer Athlons have 1-2MB per core. Compared to a 486 at 16KB, that is a huge increase in cache.</p><p>It's not really the computer, nor is it the way SMP is designed to work. The problem is really the programming. A lot of tasks just don't scale well. The bulk of your processing in a data-intensive task is still going to fall to some thread that simply can't be split easily. Tasks that you can break into workable chunks are ideal for parallelism: video encoding, file compression, image processing, data sorting... but there are so many things that are hard to effectively multithread. Rendering HTML, for instance. Sure, you can throw each tab in a thread, and that would be a nice improvement, but at the same time I don't see how you could effectively thread out a render.</p><p>Software will eventually start taking advantage of more cores, and then some of the inherent problems in how SMP is implemented will rear their ugly heads (which is what I *think* the article is about... like I RTFA or anything LOL), but I think we have to get beyond quad cores and start seeing software that actually utilizes more than one core effectively. I have a quad core I use at work, and most of the time only a single CPU, or maybe two, is ever pegged. I quite honestly can't think of what I would need an 8-core machine for, and I use Photoshop all day, which should be fairly demanding software. The 2 cores in my laptop really feel like more than enough for just about any task I can throw at it. Coming from years and years of single-CPU machines, having 2+ cores certainly makes things a lot snappier.</p></htmltext>
<tokenext>I do n't think you can get away from this mode of operation .
The way a computer 's internals are stacked is basically vertical with the processor on top .
Between the processor you have the BUS and then the RAM and the I/O that feeds to the BIOS and the rest of the board .
With RAM and FSB speeds exceeding 1GHz you still have a sizable barrier when talking to a 2-3GHz processor .
Clearly part of the solution is to dump all kinds of cache next to the CPU ( L1 ) , but that costs a great deal and adds to the silicon used .
My processor has something like 512KB per core .
I think the newer Athlons have 1-2MB per core .
Compared to a 486 at 16KB that is a huge increase in cache .
It 's not really the computer , nor is it the way SMP is designed to work .
The problem is really the programming .
A lot of tasks just do n't scale well .
The bulk of your processing in a data intensive task is still going to fall to some thread that simply ca n't be split easily .
Tasks that you can break into workable chunks are ideal for parallelism .
Video encoding , file compression , image processing , data sorting ... but there are so many things that are hard to effectively multithread .
Rendering HTML for instance .
Sure you can throw each tab in a thread , and that would be a nice improvement , but at the same time I do n't see how you could effectively thread out a render .
Software will eventually start taking advantage of more cores and then some of the inherent problems in how SMP is implemented will eventually rear their ugly heads ( which is what I * think * the article is about ... like I RTFA or anything LOL ) but I think we have to get beyond quad cores and start seeing software that actually starts utilizing more than one core effectively .
I have a quad core I use at work and most of the time only a single cpu or maybe two is ever pegged .
I quite honestly ca n't think of what I would need an 8 core machine for and I use Photoshop all day which should be software that is fairly demanding .
The 2 cores in my laptop really feel like more than enough for just about any task I can throw at it .
Coming from years and years of single cpu machines , having 2 + cores certainly makes things a lot snappier .
<sentencetext>I don't think you can get away from this mode of operation.
The way a computer's internals are stacked is basically vertical with the processor on top.
Between the processor you have the BUS and then the RAM and the I/O that feeds to the BIOS and the rest of the board.
With RAM and FSB speeds exceeding 1GHz you still have a sizable barrier when talking to a 2-3GHz processor.
Clearly part of the solution is to dump all kinds of cache next to the CPU (L1), but that costs a great deal and adds to the silicon used.
My processor has something like 512KB per core.
I think the newer Athlons have 1-2MB per core.
Compared to a 486 at 16KB that is a huge increase in cache.
It's not really the computer, nor is it the way SMP is designed to work.
The problem is really the programming.
A lot of tasks just don't scale well.
The bulk of your processing in a data intensive task is still going to fall to some thread that simply can't be split easily.
Tasks that you can break into workable chunks are ideal for parallelism.
Video encoding, file compression, image processing, data sorting... but there are so many things that are hard to effectively multithread.
Rendering HTML for instance.
Sure you can throw each tab in a thread, and that would be a nice improvement, but at the same time I don't see how you could effectively thread out a render.
Software will eventually start taking advantage of more cores and then some of the inherent problems in how SMP is implemented will eventually rear their ugly heads (which is what I *think* the article is about... like I RTFA or anything LOL) but I think we have to get beyond quad cores and start seeing software that actually starts utilizing more than one core effectively.
I have a quad core I use at work and most of the time only a single cpu or maybe two is ever pegged.
I quite honestly can't think of what I would need an 8 core machine for and I use Photoshop all day which should be software that is fairly demanding.
The 2 cores in my laptop really feel like more than enough for just about any task I can throw at it.
Coming from years and years of single cpu machines, having 2+ cores certainly makes things a lot snappier.
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562014</parent>
</comment>
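The comment above draws the line between work that splits into independent chunks (video encoding, compression, image processing) and work that is a dependent chain. A small Python sketch of that distinction, with invented names; note that CPU-bound pure-Python work does not actually speed up on threads because of the GIL, so this only illustrates the shape of the split, not a benchmark:

```python
# Chunkable vs. serial work, in miniature. Hashing independent items can
# fan out across a pool; a running digest over the same items cannot,
# because each step depends on the previous one.
from concurrent.futures import ThreadPoolExecutor
import hashlib

items = [f"frame-{i}".encode() for i in range(100)]

def digest(chunk):
    # Each item is independent of the others: ideal for parallelism.
    return [hashlib.sha256(x).hexdigest() for x in chunk]

# Split into 4 chunks and process them concurrently.
chunks = [items[i:i + 25] for i in range(0, len(items), 25)]
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = [h for part in pool.map(digest, chunks) for h in part]

# Serial counterexample: a dependent chain (like laying out a page),
# where step i needs the state left by step i-1, so extra cores sit idle.
state = hashlib.sha256()
for x in items:
    state.update(x)  # must happen in order; cannot be chunked
chained = state.hexdigest()

# The chunked result matches doing the same independent work serially.
assert parallel == digest(items)
```

The chained digest has no such equivalence: reordering or splitting its updates changes the answer, which is exactly why that kind of task resists multithreading.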
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561578</id>
	<title>Dumb programmers</title>
	<author>Anonymous</author>
	<datestamp>1269176220000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>You wait because some programmer thought it was more important to have animated menus than a fast algorithm. You wait because someone was told "computers have lots of disk space." You wait because the engineers never tested their database on a large enough scale. You wait because programmers today are taught to write everything themselves, and to simply expect new hardware to make their mistakes irrelevant.</htmltext>
<tokenext>You wait because some programmer thought it was more important to have animated menus than a fast algorithm .
You wait because someone was told " computers have lots of disk space .
" You wait because the engineers never tested their database on a large enough scale .
You wait because programmers today are taught to write everything themselves , and to simply expect new hardware to make their mistakes irrelevant .</tokentext>
<sentencetext>You wait because some programmer thought it was more important to have animated menus than a fast algorithm.
You wait because someone was told "computers have lots of disk space.
" You wait because the engineers never tested their database on a large enough scale.
You wait because programmers today are taught to write everything themselves, and to simply expect new hardware to make their mistakes irrelevant.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31566878</id>
	<title>Re:Current architecture flawed but workable BUT...</title>
	<author>rozz</author>
	<datestamp>1269268920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>solutions for the copy/move problems:
<br>- teracopy: 3rd party, nice and neat
<br>- robocopy: MS app, kinda clunky but better than nothing
<br>too lazy to add links, just google</htmltext>
<tokenext>solutions for the copy/move problems : - teracopy : 3rd party , nice and neat - robocopy : MS app , kinda clunky but better than nothing too lazy to add links , just google</tokentext>
<sentencetext>solutions for the copy/move problems:
- teracopy: 3rd party, nice and neat
- robocopy: MS app, kinda clunky but better than nothing
too lazy to add links, just google</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31568602</id>
	<title>Apple has already done it..</title>
	<author>juasko</author>
	<datestamp>1269273360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>And it will be implemented, if not already, in FreeBSD... after that, maybe Linux will follow, if that group of people understands the significance of the technology.

Under OS X 10.6.x it's called Grand Central Dispatch. It is open-sourced under a different name, libdispatch; it uses blocks, which have been submitted for standardisation and implemented in C, C++, and Objective-C. The GCC compiler supports blocks; it's just up to you developers to use them.

Automatic multithreading, optimised automatically for the hardware the user uses, without recompilation; it's all done. The only thing the developer needs to do is state which parts of their code should be multithreaded. The system decides the number of threads, the synchronisation, and all the rest.

Programming for the rest of us.

Sounds too good to be true, like an Apple fanboy's geek talk or trolling. Why don't you check it out yourselves?

MS could benefit from blocks, but would still need to implement their own GCD or open-source their kernel, if I understand the Apache license correctly (which I haven't read myself).

Apple open source: <a href="http://opensource.apple.com/source/libdispatch/libdispatch-84.5.3/" title="apple.com" rel="nofollow">http://opensource.apple.com/source/libdispatch/libdispatch-84.5.3/</a> [apple.com]
Apple white paper on GCD: <a href="http://www.apple.com/macosx/technology/#grandcentral" title="apple.com" rel="nofollow">http://www.apple.com/macosx/technology/#grandcentral</a> [apple.com]</htmltext>
<tokenext>And it will be implemented , if not already , in FreeBSD ... after that maybe Linux will follow if that group of people understands the significance of the technology .
Under OS X 10.6.x it 's called Grand Central Dispatch .
It is open-sourced under a different name , libdispatch ; it uses blocks , which have been submitted for standardisation and implemented in C , C + + , and Objective-C .
The GCC compiler supports blocks ; it 's just up to you developers to use them .
Automatic multithreading , optimised automatically for the hardware the user uses without recompilation , all is done .
The only thing the developer needs to do is state which parts of their code should be multithreaded .
The system decides the number of threads , the synchronisation and all the rest .
Programming for the rest of us .
Sounds too good to be true , like an Apple fanboy 's geek talk or trolling .
Why do n't you check it out yourselves .
MS could benefit from blocks , but would still need to implement their own GCD or open-source their kernel , if I understand the Apache license correctly ( which I have n't read myself ) .
Apple open source : http : //opensource.apple.com/source/libdispatch/libdispatch-84.5.3/ [ apple.com ] Apple white paper on GCD : http : //www.apple.com/macosx/technology/ # grandcentral [ apple.com ]</tokentext>
<sentencetext>And it will be implemented, if not already, in FreeBSD... after that maybe Linux will follow if that group of people understands the significance of the technology.
Under OS X 10.6.x it's called Grand Central Dispatch.
It is open-sourced under a different name, libdispatch; it uses blocks, which have been submitted for standardisation and implemented in C, C++, and Objective-C.
The GCC compiler supports blocks; it's just up to you developers to use them.
Automatic multithreading, optimised automatically for the hardware the user uses without recompilation, all is done.
The only thing the developer needs to do is state which parts of their code should be multithreaded.
The system decides the number of threads, the synchronisation and all the rest.
Programming for the rest of us.
Sounds too good to be true, like an Apple fanboy's geek talk or trolling.
Why don't you check it out yourselves.
MS could benefit from blocks, but would still need to implement their own GCD or open-source their kernel, if I understand the Apache license correctly (which I haven't read myself).
Apple open source: http://opensource.apple.com/source/libdispatch/libdispatch-84.5.3/ [apple.com]
Apple white paper on GCD: http://www.apple.com/macosx/technology/#grandcentral [apple.com]</sentencetext>
</comment>
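Grand Central Dispatch itself is a C API (dispatch queues plus blocks), but the model the comment describes, where the developer only marks which parts may run concurrently and the runtime picks thread counts and handles synchronisation, can be sketched with Python's standard executor as a rough analogy (not GCD; the names below are just this sketch's):

```python
# Analogy to the GCD model: submit units of work to a queue and let the
# runtime decide how many threads to use and when each result is ready.
from concurrent.futures import ThreadPoolExecutor

def render_tile(tile):
    return tile * tile  # stand-in for one independent unit of work

# max_workers=None lets the runtime size the pool to the machine, the
# rough equivalent of GCD's global queue choosing concurrency for you.
with ThreadPoolExecutor(max_workers=None) as queue:
    results = list(queue.map(render_tile, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The design point matches the comment's: the programmer declares *what* is parallelisable; the scheduling, thread counts, and result ordering are the runtime's problem.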
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_102</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561670
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564360
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_55</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562378
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31568612
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_57</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561530
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561862
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561950
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563534
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561670
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564774
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_71</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561530
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562140
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_62</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561822
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564310
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561542
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562544
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_47</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561768
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561994
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_61</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561822
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562194
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564334
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561542
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561816
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564070
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_52</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561884
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564232
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_103</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561798
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562476
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561934
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563340
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562988
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_86</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561934
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562574
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564844
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561542
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562174
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561950
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564098
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561934
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562534
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_77</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561950
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31580218
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_79</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562014
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562784
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_100</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561952
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_82</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561530
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563250
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_53</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31566878
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563688
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_67</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561934
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31568862
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_69</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561910
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31571038
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_60</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561798
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562272
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561554
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564882
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564134
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_74</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561884
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31565458
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563830
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564852
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_50</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561874
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562182
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561542
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561816
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562626
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561542
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561816
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563158
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561934
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563218
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561578
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562072
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_98</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561556
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564296
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_92</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561556
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561770
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561506
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562332
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31567664
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_75</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561578
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562716
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_66</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561530
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561802
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561542
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561816
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562764
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_65</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561822
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561992
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_56</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561506
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562332
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31566068
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562014
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31569244
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_72</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562378
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31565618
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561768
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561878
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561670
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562206
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_93</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561822
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564522
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_95</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561554
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563496
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561542
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562746
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561754
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_104</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561542
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561816
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563804
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_85</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561506
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561592
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_48</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561934
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563118
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562196
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_90</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561846
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564054
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_64</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561736
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31568878
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562378
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563730
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_78</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561578
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564548
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_81</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564104
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_54</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561798
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563926
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562648
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564182
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31569032
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561768
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562324
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561670
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564268
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_101</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562014
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31567666
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561554
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563228
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_84</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561822
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562194
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31565354
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561530
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561960
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561736
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562262
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561542
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562052
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_83</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561910
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31578906
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_46</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561530
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562124
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_80</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561578
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561676
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31569202
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_51</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561670
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562636
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561822
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562180
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31569220
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_76</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561554
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562210
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561554
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562418
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_70</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562378
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563442
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562610
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_99</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561934
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31579674
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563174
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561852
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_89</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561798
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561822
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562194
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564264
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31578778
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_94</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561554
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562166
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563144
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_68</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562164
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_96</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562604
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_59</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561934
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564278
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562734
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_73</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562038
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_58</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561530
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562366
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_49</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562030
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561542
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561816
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562558
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_63</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561798
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562542
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31567752
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_97</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562378
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563316
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_88</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563684
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_91</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562702
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562450
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_87</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562144
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_21_2345243_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561822
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561972
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561554
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563496
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563228
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564882
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562210
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562418
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562166
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561578
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562716
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561676
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31569202
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564548
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562072
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561624
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561822
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561992
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564522
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562194
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564334
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31565354
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564264
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31578778
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562180
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31569220
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561972
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564310
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563174
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561754
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561846
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564054
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562030
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561518
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561670
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562636
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562206
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564774
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564360
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564268
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561768
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561878
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562324
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561994
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561556
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564296
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561770
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561736
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562262
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31568878
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564602
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562014
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31569244
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562784
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31567666
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561558
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562648
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562038
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561952
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564134
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562604
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562734
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562144
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561772
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562610
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563830
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564852
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562702
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563684
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561874
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562182
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562988
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564104
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563144
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31567752
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563688
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561852
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562196
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31566878
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561950
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564098
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563534
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31580218
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562164
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562450
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564100
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561884
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564232
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31565458
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561910
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31571038
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31578906
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561798
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564374
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562476
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563926
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562542
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562272
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561530
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561862
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562140
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561802
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561960
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563250
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562124
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562366
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561548
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561506
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561592
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562332
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31567664
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31566068
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564182
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31569032
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562378
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31565618
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31568612
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563316
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563730
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563442
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561934
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562574
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564844
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31579674
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563340
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31568862
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562534
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564278
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563118
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563218
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564150
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562026
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562786
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561542
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31561816
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562558
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562626
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563158
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31564070
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562764
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31563804
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562544
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562052
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562746
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31562174
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_21_2345243.24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_21_2345243.31566998
</commentlist>
</conversation>
