<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_06_01_1232206</id>
	<title>Can "Page's Law" Be Broken?</title>
	<author>CmdrTaco</author>
	<datestamp>1243860660000</datestamp>
	<htmltext><a href="mailto:theodp@aol.com" rel="nofollow">theodp</a> writes <i>"Speaking at the <a href="http://code.google.com/events/io/">Google I/O</a> Developer Conference, <a href="http://valleywag.gawker.com/5272300/pages-law-is-google-founders-next+best-shot-at-immortality">Sergey Brin described Google's efforts to defeat "Page's Law,"</a> the tendency of software to get twice as slow every 18 months. 'Fortunately, the hardware folks offset that,' <a href="http://itmanagement.earthweb.com/entdev/article.php/3822371/Google-vs-Pages-Law.htm">Brin joked</a>. 'We would like to break Page's Law and have our software become increasingly fast on the same hardware.' Page, of course, refers to Google co-founder Larry Page, last seen <a href="http://www.youtube.com/watch?v=qFb2rvmrahc">delivering a nice from-the-heart commencement address at Michigan</a> that's worth a watch (<a href="http://www.google.com/intl/en/press/annc/20090502-page-commencement.html">or read</a>)."</i></htmltext>
<tokentext>theodp writes " Speaking at the Google I/O Developer Conference , Sergey Brin described Google 's efforts to defeat " Page 's Law , " the tendency of software to get twice as slow every 18 months .
'Fortunately , the hardware folks offset that, ' Brin joked .
'We would like to break Page 's Law and have our software become increasingly fast on the same hardware .
' Page , of course , refers to Google co-founder Larry Page , last seen delivering a nice from-the-heart commencement address at Michigan that 's worth a watch ( or read ) .
"</tokentext>
<sentencetext>theodp writes "Speaking at the Google I/O Developer Conference, Sergey Brin described Google's efforts to defeat "Page's Law," the tendency of software to get twice as slow every 18 months.
'Fortunately, the hardware folks offset that,' Brin joked.
'We would like to break Page's Law and have our software become increasingly fast on the same hardware.
' Page, of course, refers to Google co-founder Larry Page, last seen delivering a nice from-the-heart commencement address at Michigan that's worth a watch (or read).
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167063</id>
	<title>law!?!</title>
	<author>iCodemonkey</author>
	<datestamp>1243866780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>but I don't want to break any laws.</htmltext>
<tokentext>but I do n't want to break any laws .</tokentext>
<sentencetext>but I don't want to break any laws.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28176531</id>
	<title>Re:Ask Apple how they do it.</title>
	<author>toddestan</author>
	<datestamp>1243866780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You basically start out with a crappy, bloated, unoptimized piece of software that was so terrible only the Apple from the late 90's/early 2000's could have put it out, and then apply the optimizations that should have been in the original release later, so you can gloat about how much faster each release is.</p><p>Note that Microsoft seems to be doing something similar with Vista -&gt; Vista SP1 -&gt; Windows 7 (not sure where Vista SP2 fits in yet), though Vista started out better than the original release of OSX.</p></htmltext>
<tokentext>You basically start out with a crappy , bloated , unoptimized piece of software that was so terrible only the Apple from the late 90 's/early 2000 's could have put it out , and then apply the optimizations that should have been in the original release later , so you can gloat about how much faster each release is .
Note that Microsoft seems to be doing something similar with Vista - &gt; Vista SP1 - &gt; Windows 7 ( not sure where Vista SP2 fits in yet ) , though Vista started out better than the original release of OSX .</tokentext>
<sentencetext>You basically start out with a crappy, bloated, unoptimized piece of software that was so terrible only the Apple from the late 90's/early 2000's could have put it out, and then apply the optimizations that should have been in the original release later, so you can gloat about how much faster each release is.
Note that Microsoft seems to be doing something similar with Vista -&gt; Vista SP1 -&gt; Windows 7 (not sure where Vista SP2 fits in yet), though Vista started out better than the original release of OSX.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167375</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28174237</id>
	<title>Doesn't anyone teach benchmarking at uni?</title>
	<author>C0801 p475 ur 81115</author>
	<datestamp>1243852800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I'm constantly explaining to junior devs how to use indexes and other performance features, but the basics of measure, test, measure seem to be lost on all of them. Where is the scientific method in CS these days?</htmltext>
<tokentext>I 'm constantly explaining to junior devs how to use indexes and other performance features , but the basics of measure , test , measure seem to be lost on all of them .
Where is the scientific method in CS these days ?</tokentext>
<sentencetext>I'm constantly explaining to junior devs how to use indexes and other performance features, but the basics of measure, test, measure seem to be lost on all of them.
Where is the scientific method in CS these days?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168645</id>
	<title>Re:Ask Apple how they do it.</title>
	<author>blueZ3</author>
	<datestamp>1243873680000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Because 10.0 sucked? I don't know if it was intentional or not, but that was slow enough that I noticed that speed was an issue (and I was using only the most pedestrian of software--browser and email). It was as if the OS was completely unoptimized. If subsequent releases had gotten slower, they'd have been going backwards.</p><p>My primary computer, my wife's computer, and our HTPC are all Macs, so I'm not trolling... but damn was it slow.</p></htmltext>
<tokentext>Because 10.0 sucked ?
I do n't know if it was intentional or not , but that was slow enough that I noticed that speed was an issue ( and I was using only the most pedestrian of software--browser and email ) .
It was as if the OS was completely unoptimized .
If subsequent releases had gotten slower , they 'd have been going backwards .
My primary computer , my wife 's computer , and our HTPC are all Macs , so I 'm not trolling... but damn was it slow .</tokentext>
<sentencetext>Because 10.0 sucked?
I don't know if it was intentional or not, but that was slow enough that I noticed that speed was an issue (and I was using only the most pedestrian of software--browser and email).
It was as if the OS was completely unoptimized.
If subsequent releases had gotten slower, they'd have been going backwards.
My primary computer, my wife's computer, and our HTPC are all Macs, so I'm not trolling... but damn was it slow.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167375</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28171545</id>
	<title>Wirth's Law</title>
	<author>pizza_milkshake</author>
	<datestamp>1243886460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><a href="http://en.wikipedia.org/wiki/Wirth's_law" title="wikipedia.org">http://en.wikipedia.org/wiki/Wirth's_law</a> [wikipedia.org]</htmltext>
<tokentext>http : //en.wikipedia.org/wiki/Wirth 's _law [ wikipedia.org ]</tokentext>
<sentencetext>http://en.wikipedia.org/wiki/Wirth's_law [wikipedia.org]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28173171</id>
	<title>Re:Of Course</title>
	<author>simplerThanPossible</author>
	<datestamp>1243849020000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Yes, the 2nd ed of the Dragon book rewrote their sample parser in OO. It was <i>much</i> simpler, clearer and cleaner in the procedural version in the 1st ed.</p></htmltext>
<tokentext>Yes , the 2nd ed of the Dragon book rewrote their sample parser in OO .
It was much simpler , clearer and cleaner in the procedural version in the 1st ed .</tokentext>
<sentencetext>Yes, the 2nd ed of the Dragon book rewrote their sample parser in OO.
It was much simpler, clearer and cleaner in the procedural version in the 1st ed.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168151</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28181127</id>
	<title>Re:They probably will.</title>
	<author>irchans</author>
	<datestamp>1243953120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That was a great post.</p><p>Microsoft does not make its operating systems faster.  (Windows 7 might be an exception.)  Instead, it adds more capabilities.  Why?  Consider Microsoft's incentives for increasing the speed of its operating system.  Basically, Microsoft increases its profits if money invested in improving operating system speed creates more money through sales, i.e. the derivative of profit from sales with respect to the cost of improving speed is greater than 1.</p><p>Let</p><p>PFS be Profit From Sales<br>NPP be Net Profit per sale<br>PS  be Products Sold<br>S   be Speed<br>CI  be the cost of improvement<br>D(x) be the differential of x.</p><p>Then</p><p>PFS = NPP * PS.</p><p>D(PFS) / D(CI)<br>= D(PFS) / D(S) * D(S) / D(CI)<br>= (PS * D(NPP) / D(S) + NPP * D(PS) / D(S)) * D(S) / D(CI)</p><p>Microsoft does not improve the speed of its operating systems too much because the consumer does not respond to a 10% improvement in speed with more sales or by paying more for each sale.</p><p>On the other hand, if Google were running a Google-specific operating system on many machines, then the economics are very different.  I am guessing that Google has spent about 1 billion dollars on computer hardware.  They may be running the equivalent of 1 million desktop computers at a cost of say 200 million dollars a year in maintenance.  If they can improve the speed of their operating system by 10%, then they save 20 million dollars per year.  So if it costs them 40 million dollars to make a 10% speed improvement, they should (and probably will) do it.  Forty million dollars corresponds to about 400,000 programmer hours or 200 programmer-years.  So if in one year 200 programmers could improve the speed of Google's machines by 10%, then Google will do that.</p><p>The pressure on Microsoft to increase its speed is not nearly as strong.</p></htmltext>
<tokentext>That was a great post .
Microsoft does not make its operating systems faster .
( Windows 7 might be an exception . )
Instead , it adds more capabilities .
Why ?
Consider Microsoft 's incentives for increasing the speed of its operating system .
Basically , Microsoft increases its profits if money invested in improving operating system speed creates more money through sales , i.e. the derivative of profit from sales with respect to the cost of improving speed is greater than 1 .
Let PFS be Profit From Sales , NPP be Net Profit per sale , PS be Products Sold , S be Speed , CI be the cost of improvement , and D ( x ) be the differential of x .
Then PFS = NPP * PS .
D ( PFS ) / D ( CI ) = D ( PFS ) / D ( S ) * D ( S ) / D ( CI ) = ( PS * D ( NPP ) / D ( S ) + NPP * D ( PS ) / D ( S ) ) * D ( S ) / D ( CI )
Microsoft does not improve the speed of its operating systems too much because the consumer does not respond to a 10 % improvement in speed with more sales or by paying more for each sale .
On the other hand , if Google were running a Google-specific operating system on many machines , then the economics are very different .
I am guessing that Google has spent about 1 billion dollars on computer hardware .
They may be running the equivalent of 1 million desktop computers at a cost of say 200 million dollars a year in maintenance .
If they can improve the speed of their operating system by 10 % , then they save 20 million dollars per year .
So if it costs them 40 million dollars to make a 10 % speed improvement , they should ( and probably will ) do it .
Forty million dollars corresponds to about 400,000 programmer hours or 200 programmer-years .
So if in one year 200 programmers could improve the speed of Google 's machines by 10 % , then Google will do that .
The pressure on Microsoft to increase its speed is not nearly as strong .</tokentext>
<sentencetext>That was a great post.
Microsoft does not make its operating systems faster.
(Windows 7 might be an exception.)
Instead, it adds more capabilities.
Why?
Consider Microsoft's incentives for increasing the speed of its operating system.
Basically, Microsoft increases its profits if money invested in improving operating system speed creates more money through sales, i.e. the derivative of profit from sales with respect to the cost of improving speed is greater than 1.
Let PFS be Profit From Sales, NPP be Net Profit per sale, PS be Products Sold, S be Speed, CI be the cost of improvement, and D(x) be the differential of x.
Then PFS = NPP * PS.
D(PFS) / D(CI) = D(PFS) / D(S) * D(S) / D(CI) = (PS * D(NPP) / D(S) + NPP * D(PS) / D(S)) * D(S) / D(CI)
Microsoft does not improve the speed of its operating systems too much because the consumer does not respond to a 10% improvement in speed with more sales or by paying more for each sale.
On the other hand, if Google were running a Google-specific operating system on many machines, then the economics are very different.
I am guessing that Google has spent about 1 billion dollars on computer hardware.
They may be running the equivalent of 1 million desktop computers at a cost of say 200 million dollars a year in maintenance.
If they can improve the speed of their operating system by 10%, then they save 20 million dollars per year.
So if it costs them 40 million dollars to make a 10% speed improvement, they should (and probably will) do it.
Forty million dollars corresponds to about 400,000 programmer hours or 200 programmer-years.
So if in one year 200 programmers could improve the speed of Google's machines by 10%, then Google will do that.
The pressure on Microsoft to increase its speed is not nearly as strong.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166795</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166881</id>
	<title>Page's Law.</title>
	<author>C_Kode</author>
	<datestamp>1243865760000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Sounds like someone is trying to cement their legacy in history by stamping their name on common knowledge.<nobr> <wbr></nobr>:-)</p></htmltext>
<tokentext>Sounds like someone is trying to cement their legacy in history by stamping their name on common knowledge .
: - )</tokentext>
<sentencetext>Sounds like someone is trying to cement their legacy in history by stamping their name on common knowledge.
:-)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168741</id>
	<title>Re:Of Course</title>
	<author>grumbel</author>
	<datestamp>1243874100000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>Well, that depends, OOP alone is certainly not the guilty one for causing all the slowdown, but abstraction in general is guilty for a lot of things. Today's software is just way too removed from the actual hardware to allow certain kinds of optimizations. Random example: When you have a 2D game on older hardware (say GBA or similar) you could scroll by manipulating two bytes that represent the scroll offset, everything else was done in hardware. How do you scroll in a 2D game today? Fullscreen refreshes, as you don't have any access to the hardware to allow faster ways to scroll. So in the worst case you have to manipulate not 2 bytes, but around six million of them. That's quite a few orders of magnitude difference there, that you can't really optimize away today.</p><p>Now for real games of course you might have a GPU that can handle that amount of speed and since modern games are 3D you don't really have a choice of not doing fullscreen refreshes to begin with, but as soon as you look into web games you can see all the problems, games in Flash or Javascript most of the time run completely terribly, worse than games you might have played a decade or two ago, because those games don't even have GPU access but instead pump their data through layers upon layers of abstraction before they finally hit the graphics card.</p><p>In the end I think the core problem is simply that today's software is written far too often for an abstract black box, instead of for actual hardware. Especially web development is just way too removed from the actual machine to even have a chance of running quickly. To make things really fast you would have to optimize all layers of abstractions that the code has to run through, but most often you just don't have the control over it, as development is far more spread out these days.
It's no longer your code and the hardware, it's your code, dozens or even hundreds of libraries and then maybe far far away some piece of hardware again.</p></htmltext>
<tokentext>Well , that depends , OOP alone is certainly not the guilty one for causing all the slowdown , but abstraction in general is guilty for a lot of things .
Today 's software is just way too removed from the actual hardware to allow certain kinds of optimizations .
Random example : When you have a 2D game on older hardware ( say GBA or similar ) you could scroll by manipulating two bytes that represent the scroll offset , everything else was done in hardware .
How do you scroll in a 2D game today ?
Fullscreen refreshes , as you do n't have any access to the hardware to allow faster ways to scroll .
So in the worst case you have to manipulate not 2 bytes , but around six million of them .
That 's quite a few orders of magnitude difference there , that you ca n't really optimize away today .
Now for real games of course you might have a GPU that can handle that amount of speed and since modern games are 3D you do n't really have a choice of not doing fullscreen refreshes to begin with , but as soon as you look into web games you can see all the problems , games in Flash or Javascript most of the time run completely terribly , worse than games you might have played a decade or two ago , because those games do n't even have GPU access but instead pump their data through layers upon layers of abstraction before they finally hit the graphics card .
In the end I think the core problem is simply that today 's software is written far too often for an abstract black box , instead of for actual hardware .
Especially web development is just way too removed from the actual machine to even have a chance of running quickly .
To make things really fast you would have to optimize all layers of abstractions that the code has to run through , but most often you just do n't have the control over it , as development is far more spread out these days .
It 's no longer your code and the hardware , it 's your code , dozens or even hundreds of libraries and then maybe far far away some piece of hardware again .</tokentext>
<sentencetext>Well, that depends, OOP alone is certainly not the guilty one for causing all the slowdown, but abstraction in general is guilty for a lot of things.
Today's software is just way too removed from the actual hardware to allow certain kinds of optimizations.
Random example: When you have a 2D game on older hardware (say GBA or similar) you could scroll by manipulating two bytes that represent the scroll offset, everything else was done in hardware.
How do you scroll in a 2D game today?
Fullscreen refreshes, as you don't have any access to the hardware to allow faster ways to scroll.
So in the worst case you have to manipulate not 2 bytes, but around six million of them.
That's quite a few orders of magnitude difference there, that you can't really optimize away today.
Now for real games of course you might have a GPU that can handle that amount of speed and since modern games are 3D you don't really have a choice of not doing fullscreen refreshes to begin with, but as soon as you look into web games you can see all the problems, games in Flash or Javascript most of the time run completely terribly, worse than games you might have played a decade or two ago, because those games don't even have GPU access but instead pump their data through layers upon layers of abstraction before they finally hit the graphics card.
In the end I think the core problem is simply that today's software is written far too often for an abstract black box, instead of for actual hardware.
Especially web development is just way too removed from the actual machine to even have a chance of running quickly.
To make things really fast you would have to optimize all layers of abstractions that the code has to run through, but most often you just don't have the control over it, as development is far more spread out these days.
It's no longer your code and the hardware, it's your code, dozens or even hundreds of libraries and then maybe far far away some piece of hardware again.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167511</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28171081</id>
	<title>Re:Of Course</title>
	<author>Anonymous</author>
	<datestamp>1243884720000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p><b>"This often explains why old languages like C, Cobol etc. are able to do the same thing as a program written in C++, Java or C# at the fraction of the resource cost and at much greater speed. The disadvantage is that the old languages require more skills from the programmer to avoid the classical problems of deadlocks and race conditions as well as having to implement functionality for linked lists etc."</b> - by Z00L00K (682162) on Monday June 01, @09:24AM (#28166975) Homepage</p></div><p>True! Because, iirc? For every object you instance, it's an added 472 bytes of memory used by said object, @ least using Microsoft or Borland Compilers for Win32 PE development.</p><p>NOW - Though that might not seem like a lot, you have to consider that the gui alone is probably composed of N objects, &amp; whatever classes you make full-blown objects will be adding additional overheads, per each one created.</p><p>(Plus, Hey - I don't need an object-oriented design to do a "Hello World" level (meaning simpler/smaller) program:  Procedural programming does the job nicely!)</p><p>APK</p>
	</htmltext>
<tokentext>" This often explains why old languages like C , Cobol etc .
are able to do the same thing as a program written in C + + , Java or C # at the fraction of the resource cost and at much greater speed .
The disadvantage is that the old languages require more skills from the programmer to avoid the classical problems of deadlocks and race conditions as well as having to implement functionality for linked lists etc .
" - by Z00L00K ( 682162 ) on Monday June 01 , @ 09 : 24AM ( # 28166975 ) Homepage
True !
Because , iirc ?
For every object you instance , it 's an added 472 bytes of memory used by said object , @ least using Microsoft or Borland Compilers for Win32 PE development .
NOW - Though that might not seem like a lot , you have to consider that the gui alone is probably composed of N objects , &amp; whatever classes you make full-blown objects will be adding additional overheads , per each one created .
( Plus , Hey - I do n't need an object-oriented design to do a " Hello World " level ( meaning simpler/smaller ) program : Procedural programming does the job nicely !
) APK</tokentext>
<sentencetext>"This often explains why old languages like C, Cobol etc.
are able to do the same thing as a program written in C++, Java or C# at the fraction of the resource cost and at much greater speed.
The disadvantage is that the old languages require more skills from the programmer to avoid the classical problems of deadlocks and race conditions as well as having to implement functionality for linked lists etc.
" - by Z00L00K (682162) on Monday June 01, @09:24AM (#28166975) Homepage
True!
Because, iirc?
For every object you instance, it's an added 472 bytes of memory used by said object, @ least using Microsoft or Borland Compilers for Win32 PE development.
NOW - Though that might not seem like a lot, you have to consider that the gui alone is probably composed of N objects, &amp; whatever classes you make full-blown objects will be adding additional overheads, per each one created.
(Plus, Hey - I don't need an object-oriented design to do a "Hello World" level (meaning simpler/smaller) program:  Procedural programming does the job nicely!
)APK
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28169089</id>
	<title>Re:Of Course</title>
	<author>Anonymous</author>
	<datestamp>1243875480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>*points to his later comment*</p><p><a href="http://slashdot.org/comments.pl?sid=1252121&amp;cid=28168729" title="slashdot.org">http://slashdot.org/comments.pl?sid=1252121&amp;cid=28168729</a> [slashdot.org]</p></htmltext>
<tokentext>* points to his later comment * http : //slashdot.org/comments.pl ? sid = 1252121&amp;cid = 28168729 [ slashdot.org ]</tokentext>
<sentencetext>*points to his later comment* http://slashdot.org/comments.pl?sid=1252121&amp;cid=28168729 [slashdot.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167939</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168139</id>
	<title>Re:Of Course</title>
	<author>Anonymous</author>
	<datestamp>1243871460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>And this is often the curse of object-oriented programming. Objects carries more data than necessary for many of the uses of the object. Only a few cases exists where all the object data is used. A lot of object-oriented programming is somewhat like using 18-wheelers for grocery shopping.</i></p><p>Surely this is a problem begging a solution in the form of smarter compilers?</p></htmltext>
<tokentext>And this is often the curse of object-oriented programming .
Objects carries more data than necessary for many of the uses of the object .
Only a few cases exists where all the object data is used .
A lot of object-oriented programming is somewhat like using 18-wheelers for grocery shopping .
Surely this is a problem begging a solution in the form of smarter compilers ?</tokentext>
<sentencetext>And this is often the curse of object-oriented programming.
Objects carries more data than necessary for many of the uses of the object.
Only a few cases exists where all the object data is used.
A lot of object-oriented programming is somewhat like using 18-wheelers for grocery shopping.
Surely this is a problem begging a solution in the form of smarter compilers?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167939</id>
	<title>Re:Of Course</title>
	<author>Anonymous</author>
	<datestamp>1243870560000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext>Wow, maybe you should have listened to your senior programmers. Faster execution speed is not often the goal. Static allocation is deterministic. Slower and deterministic is better in certain types of programming, than faster and non-deterministic. You scoff at their O(N^2) algorithm without even considering all the ramifications. Let me guess: Java programmer?</htmltext>
<tokentext>Wow , maybe you should have listened to your senior programmers .
Faster execution speed is not often the goal .
Static allocation is deterministic .
Slower and deterministic is better in certain types of programming , than faster and non-deterministic .
You scoff at their O ( N ^ 2 ) algorithm without even considering all the ramifications .
Let me guess : Java programmer ?</tokentext>
<sentencetext>Wow, maybe you should have listened to your senior programmers.
Faster execution speed is not often the goal.
Static allocation is deterministic.
Slower and deterministic is better in certain types of programming, than faster and non-deterministic.
You scoff at their O(N^2) algorithm without even considering all the ramifications.
Let me guess: Java programmer?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167259</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168675</id>
	<title>Re:They probably will.</title>
	<author>Dhalka226</author>
	<datestamp>1243873800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It <em>is</em> an incentive, don't get me wrong, but I don't think it is as big as you seem to suggest.

</p><p>As you said, these things run on Google's servers and communicate through Google's pipes.  With the exception of the amount of data that traverses their Internet-bound pipes--which doesn't seem to be what they're referring to--all of these are sunk costs.  They can't just call up their providers and downgrade their usage for a while.  I don't see Google getting rid of (trashing, selling, donating) these machines if new efficiencies make them obsolete; I don't even particularly see them flipping the power switches to save on power costs.  These machines <em>will</em> be needed again, it's just a question of when.

</p><p>So the efficiency is, essentially, in slowing down the purchase of new hardware.  It's certainly a pressure, but a fairly mild one considering that buying new hardware usually goes hand-in-hand with increase in demand and thus increase in revenue.  In a sense, if it DIDN'T all run on their own equipment I think the pressure would be larger.  Servicing the same people with a smaller monthly bill is pretty easy to sell ANYBODY on; servicing more people in the future on less new hardware than you would otherwise buy is good, but less compelling.</p></htmltext>
<tokentext>It is an incentive , do n't get me wrong , but I do n't think it is as big as you seem to suggest .
As you said , these things run on Google 's servers and communicate through Google 's pipes .
With the exception of the amount of data that traverses their Internet-bound pipes--which does n't seem to be what they 're referring to--all of these are sunk costs .
They ca n't just call up their providers and downgrade their usage for a while .
I do n't see Google getting rid of ( trashing , selling , donating ) these machines if new efficiencies make them obsolete ; I do n't even particularly see them flipping the power switches to save on power costs .
These machines will be needed again , it 's just a question of when .
So the efficiency is , essentially , in slowing down the purchase of new hardware .
It 's certainly a pressure , but a fairly mild one considering that buying new hardware usually goes hand-in-hand with increase in demand and thus increase in revenue .
In a sense , if it DID N'T all run on their own equipment I think the pressure would be larger .
Servicing the same people with a smaller monthly bill is pretty easy to sell ANYBODY on ; servicing more people in the future on less new hardware than you would otherwise buy is good , but less compelling .</tokentext>
<sentencetext>It is an incentive, don't get me wrong, but I don't think it is as big as you seem to suggest.
As you said, these things run on Google's servers and communicate through Google's pipes.
With the exception of the amount of data that traverses their Internet-bound pipes--which doesn't seem to be what they're referring to--all of these are sunk costs.
They can't just call up their providers and downgrade their usage for a while.
I don't see Google getting rid of (trashing, selling, donating) these machines if new efficiencies make them obsolete; I don't even particularly see them flipping the power switches to save on power costs.
These machines will be needed again, it's just a question of when.
So the efficiency is, essentially, in slowing down the purchase of new hardware.
It's certainly a pressure, but a fairly mild one considering that buying new hardware usually goes hand-in-hand with increase in demand and thus increase in revenue.
In a sense, if it DIDN'T all run on their own equipment I think the pressure would be larger.
Servicing the same people with a smaller monthly bill is pretty easy to sell ANYBODY on; servicing more people in the future on less new hardware than you would otherwise buy is good, but less compelling.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166795</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166795</id>
	<title>They probably will.</title>
	<author>Anonymous</author>
	<datestamp>1243865280000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext>I'd suspect that Google probably will. Not because of any OMG special Google Genius(tm), but because of simple economics.<br> <br>

Google's apps are largely web-based. They run on Google's servers and communicate through Google's pipes.  Since Google pays for every server-side cycle, and every byte sent back and forth, they have an obvious incentive to economize. Since Google runs homogeneous services on a vast scale, even tiny economies end up being worth a lot of money. <br> <br>
Compare this to the usual client application model: even if the scale is equivalent, the maker of the software doesn't pay for the computational resources. Their only pressure is indirect (i.e., customers who don't buy because their machines don't meet spec, or customers who get pissed off because performance sucks). They thus have a far smaller incentive to watch their resource consumption.
<br> <br> The client side might still be subject to bloat, since Google doesn't pay for those cycles; but I suspect competitive pressure, and the uneven JavaScript landscape, will have an effect here as well. If you are trying to sell the virtues of webapps, your apps are (despite the latency inherent in web communication) going to have to exhibit adequate responsiveness under suboptimal conditions (i.e., IE 6, cellphones, cellphones running IE 6), which provides the built-in "develop for resource-constrained systems" pressure.</htmltext>
<tokenext>I 'd suspect that Google probably will .
Not because of any OMG special Google Genius ( tm ) , but because of simple economics .
Google 's apps are largely web based .
They run on Google 's servers and communicate through Google 's pipes .
Since Google pays for every server side cycle , and every byte sent back and forth , they have an obvious incentive to economize .
Since Google runs homogenous services on a vast scale , even tiny economies end up being worth a lot of money .
Compare this to the usual client application model : Even if the scale is equivalent , the maker of the software does n't pay for the computational resources .
Their only pressure is indirect ( i.e .
customers who do n't buy because their machines do n't meet spec , or customers who get pissed off because performance sucks ) .
They thus have a far smaller incentive to watch their resource consumption .
The client side might still be subject to bloat , since Google does n't pay for those cycles ; but I suspect competitive pressure , and the uneven javascript landscape , will have an effect here as well .
If you are trying to sell the virtues of webapps , your apps are ( despite the latency inherent in web communication ) going to have to exhibit adequate responsiveness under suboptimal conditions ( i.e .
IE 6 , cellphones , cellphones running IE 6 ) , which provides the built in " develop for resource constrained systems " pressure .</tokentext>
<sentencetext>I'd suspect that Google probably will.
Not because of any OMG special Google Genius(tm), but because of simple economics.
Google's apps are largely web based.
They run on Google's servers and communicate through Google's pipes.
Since Google pays for every server side cycle, and every byte sent back and forth, they have an obvious incentive to economize.
Since Google runs homogenous services on a vast scale, even tiny economies end up being worth a lot of money.
Compare this to the usual client application model: Even if the scale is equivalent, the maker of the software doesn't pay for the computational resources.
Their only pressure is indirect(i.e.
customers who don't buy because their machines don't meet spec, or customers who get pissed off because performance sucks).
They thus have a far smaller incentive to watch their resource consumption.
The client side might still be subject to bloat, since Google doesn't pay for those cycles; but I suspect competitive pressure, and the uneven javascript landscape, will have an effect here as well.
If you are trying to sell the virtues of webapps, your apps are (despite the latency inherent in web communication) going to have to exhibit adequate responsiveness under suboptimal conditions(i.e.
IE 6, cellphones, cellphones running IE 6), which provides the built in "develop for resource constrained systems" pressure.</sentencetext>
</comment>
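The "tiny economies at vast scale" argument in the comment above is simple arithmetic. A back-of-the-envelope sketch (every number here is invented for illustration; these are not Google's figures):

```python
# Hypothetical numbers: shave 1 ms of server-side CPU off each request
# and see how much machine time that frees at large scale.
requests_per_day = 1_000_000_000      # invented traffic figure
cpu_ms_saved_per_request = 1          # a "tiny economy"
core_seconds_per_day = 86_400         # one CPU core running for a day

cpu_seconds_saved = requests_per_day * cpu_ms_saved_per_request / 1000
cores_freed = cpu_seconds_saved / core_seconds_per_day
print(round(cores_freed))             # roughly a dozen core-days saved, daily
```

The point is that the saving scales linearly with traffic, so even a millisecond is worth engineering effort when the operator pays for every cycle.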
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168151</id>
	<title>Re:Of Course</title>
	<author>AmiMoJo</author>
	<datestamp>1243871580000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
<htmltext><p>OO was never designed for speed or efficiency, only ease of modelling business systems. It became a fashionable buzzword and suddenly everyone wanted to use it for everything, so you end up in a situation where a lot of OO programs really only use OO for allocating memory for new objects.</p><p>I'm not trying to be a troll here; I just find it odd that OO is considered the be-all and end-all of programming, to the point where people write horribly inefficient code just because they want to use it. OO has its place, and it does what it was designed to do quite well, but people should not shy away from writing quality non-OO code. I think a lot of programmers come up knowing nothing but OO these days, which is a bit scary...</p></htmltext>
<tokenext>OO was never designed for speed or efficiency , only ease of modelling business systems .
It became a fashionable buzz-word and suddenly everyone wanted to use it for everything , so you end up in a situation where a lot of OO programs really only use OO for allocating memory for new objects.I 'm not trying to be a troll here , I just find it odd that OO is considered the be-all and end-all of programming to the point where people write horribly inefficient code just because they want to use it .
OO has it 's place , and it does what it was designed to do quite well , but people should not shy away from writing quality non-OO code .
I think a lot of programmings come up knowing nothing but OO these days , which is a bit scary.. .</tokentext>
<sentencetext>OO was never designed for speed or efficiency, only ease of modelling business systems.
It became a fashionable buzz-word and suddenly everyone wanted to use it for everything, so you end up in a situation where a lot of OO programs really only use OO for allocating memory for new objects.I'm not trying to be a troll here, I just find it odd that OO is considered the be-all and end-all of programming to the point where people write horribly inefficient code just because they want to use it.
OO has it's place, and it does what it was designed to do quite well, but people should not shy away from writing quality non-OO code.
I think a lot of programmings come up knowing nothing but OO these days, which is a bit scary...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975</parent>
</comment>
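The per-object overhead being debated above is easy to demonstrate concretely. A minimal Python sketch (mine, not from the thread; the class names are made up) showing the extra per-instance memory an ordinary object carries compared with a slotted one:

```python
# Compare the shallow memory footprint of a plain object (which owns a
# per-instance __dict__) against a __slots__ object (which does not).
import sys

class PlainPoint:                 # ordinary class: each instance gets a __dict__
    def __init__(self, x, y):
        self.x, self.y = x, y

class SlimPoint:                  # __slots__ removes the per-instance __dict__
    __slots__ = ("x", "y")
    def __init__(self, x, y):
        self.x, self.y = x, y

p, s = PlainPoint(1, 2), SlimPoint(1, 2)
plain_size = sys.getsizeof(p) + sys.getsizeof(p.__dict__)
slim_size = sys.getsizeof(s)
print(plain_size > slim_size)     # the plain object is strictly larger
```

`__slots__` trades away the per-instance `__dict__`, which is exactly the kind of baggage-nobody-uses that the thread is complaining about; exact byte counts vary by Python version, so the sketch only asserts the direction of the difference.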
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168711</id>
	<title>Re:Of Course</title>
	<author>Dr\_Barnowl</author>
	<datestamp>1243873980000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext><p>Heavy is a relative term, of course.</p><p>My entire vim install folder : 21MB, including docs</p><p>Memory eaten by vim on loading - 8.6MB<br>Memory eaten by WINWORD.exe - well, it started at 17MB. All I did was let it sit there for a couple of minutes, and close the help browser, now it's eaten 20.3MB. Typed "Hello There", and it goes up to 21.4MB.</p><p>Memory consumed by vim for "Hello There" - 76 KB. Winword - 1.1MB</p><p>Hell, Word uses as much memory as vim does to load, just to save the file to Hello There.doc</p></htmltext>
<tokenext>Heavy is a relative term , of course.My entire vim install folder : 21MB , including docsMemory eaten by vim on loading - 8.6MBMemory eaten by WINWORD.exe - well , it started at 17MB .
All I did was let it sit there for a couple of minutes , and close the help browser , now it 's eaten 20.3MB .
Typed " Hello There " , and it goes up to 21.4MB.Memory consumed by vim for " Hello There " - 76 KB .
Winword - 1.1MBHell , Word uses as much memory as vim does to load , just to save the file to Hello There.doc</tokentext>
<sentencetext>Heavy is a relative term, of course.My entire vim install folder : 21MB, including docsMemory eaten by vim on loading - 8.6MBMemory eaten by WINWORD.exe - well, it started at 17MB.
All I did was let it sit there for a couple of minutes, and close the help browser, now it's eaten 20.3MB.
Typed "Hello There", and it goes up to 21.4MB.Memory consumed by vim for "Hello There" - 76 KB.
Winword - 1.1MBHell, Word uses as much memory as vim does to load, just to save the file to Hello There.doc</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166951</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28169313</id>
	<title>The really important comparison</title>
	<author>henrypijames</author>
	<datestamp>1243876440000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
<htmltext><p>... isn't software vs. hardware, but speed vs. functionality, i.e., in the history of most software, the decrease in speed is disproportionate to the increase in functionality. Of course, "disproportionate" is subjective, and new, advanced functionality is generally more complicated and resource-intensive than old, basic functionality. So a simple reverse-linear relationship might be unrealistic, but when much software doesn't even manage to beat the reverse-quadratic ratio, there's definitely something wrong.</p></htmltext>
<tokenext>... is n't software v. hardware , but speed v. functionality , i. e. , in the history of most software , the decrease in speed is disproportional to the increase in functionality .
Of course , " disproportional " is subjective , and new , advanced functionalities are generally more complicated and resource intensive than old , basic ones .
So a simple reverse-linear relationship might be unrealistic , but when many software do n't even manage to beat the reverse-quadratic ratio , there 's definitely something wrong .</tokentext>
<sentencetext>... isn't software v. hardware, but speed v. functionality, i. e., in the history of most software, the decrease in speed is disproportional to the increase in functionality.
Of course, "disproportional" is subjective, and new, advanced functionalities are generally more complicated and resource intensive than old, basic ones.
So a simple reverse-linear relationship might be unrealistic, but when many software don't even manage to beat the reverse-quadratic ratio, there's definitely something wrong.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28169977</id>
	<title>Re:Of Course</title>
	<author>cstacy</author>
	<datestamp>1243879500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><div class="quote"><p>OO was never designed for speed or efficiency, only ease of modelling business systems.</p></div><p>OO was designed for speed and efficiency for implementing complex applications such as operating systems.</p><p>(There, fixed that for you.)</p><p>I'm thinking of the OO that predated C++ or Java -- the stuff in Smalltalk and Lisp from the 1970s. (Which still works better than the newer stuff...)</p></htmltext>
<tokenext>OO was never designed for speed or efficiency , only ease of modelling business systems.OO was designed for speed and efficiency for implementing complex applications such as operating systems .
( There , fixed that for you .
) I 'm thinking of the OO that predated C + + or Java -- the stuff in Smalltalk and Lisp from the 1970s .
( Which still works better than the newer stuff... )</tokentext>
<sentencetext>OO was never designed for speed or efficiency, only ease of modelling business systems.OO was designed for speed and efficiency for implementing complex applications such as operating systems.
(There, fixed that for you.
)

I'm thinking of the OO that predated C++ or Java -- the stuff in Smalltalk and Lisp from the 1970s.
(Which still works better than the newer stuff...)
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168151</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28172105</id>
	<title>Re:Of Course</title>
	<author>mzs</author>
	<datestamp>1243888140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>Sometimes your 15-year senior programmers are worth their salt; in fact, they usually are. I would like to see your fancy C++-with-templates stuff compile on some of the proprietary toolkits I have seen for small ARM and gate-array systems. Writing code that uses a number of fixed-size, simple data structures, all written in C, makes it very easy to port to embedded systems. The moment you use something that seems as innocuous as C++ exceptions, you're in a world of hurt the moment you step outside GCC, VC++, SUNWpro, or XLC.</p><p>Here is one story. There is a fellow who likes to run C++ stuff he has written on our systems. The smallest config we run that stuff on has 16 MB RAM and a 200 MHz processor, and we use GCC, so it's okay. One day he wanted to run on a new, bigger system, so I enabled the C++ runtime libs; this added 17 s to the boot and used dozens of MB of RAM, but whatever: this was a system with 128 MB RAM and a 1 GHz CPU. Then we got newer boards like those, but with 256 MB RAM, a 1.3 GHz CPU, and gigabit Ethernet. Occasionally the boards would reset. It turned out that, due to all the extra resources and the dynamic sizing his code used, every now and then the HW watchdog timer was not getting twiddled (a 0.5 s window, on a system with real-time requirements as tight as 4 ms for some things) and the board was being reset.</p></htmltext>
<tokenext>Sometimes your 15 year senior programmers have earned their salt , in fact they usually have .
I would like to see your fancy C + + with templates stuff compile onto some of the proprietary toolkits I have seen for small ARM and gate array systems .
Writing code that uses a number of fixed sized simple data structures all written in C makes it very easy to port it to embedded systems .
The moment you use something that seems as innocuous as C + + exceptions , you 're in a world of hurt the moment you step out of GCC , VC + + , SUNWpro , or XLC.Here is one story .
There is a fellow that likes to run C + + stuff he has written on our systems .
The smallest config we run with that stuff has 16MB RAM and a 200MHz processor and we use GCC , so it 's okay .
One day he wanted to run on a new bigger system , so I enabled the C + + run time libs , this added 17s to the boot and used dozens of MB of RAM , but whatever this was in a system with 128 MB RAM and 1GHZ cpu .
Then we got newer boards like those but with 256 MB RAM and 1.3GHz cpu and gigabit .
Occasionally the boards would reset .
It turned-out that due to all the extra resources and the dynamic sizing his code used , every now and then the HW watchdog timer was not getting twiddled ( 0.5s on a system with realtime requirements of various things in some cases of 4ms ) and the board was being reset .</tokentext>
<sentencetext>Sometimes your 15 year senior programmers have earned their salt, in fact they usually have.
I would like to see your fancy C++ with templates stuff compile onto some of the proprietary toolkits I have seen for small ARM and gate array systems.
Writing code that uses a number of fixed sized simple data structures all written in C makes it very easy to port it to embedded systems.
The moment you use something that seems as innocuous as C++ exceptions, you're in a world of hurt the moment you step out of GCC, VC++, SUNWpro, or XLC.Here is one story.
There is a fellow that likes to run C++ stuff he has written on our systems.
The smallest config we run with that stuff has 16MB RAM and a 200MHz processor and we use GCC, so it's okay.
One day he wanted to run on a new bigger system, so I enabled the C++ run time libs, this added 17s to the boot and used dozens of MB of RAM, but whatever this was in a system with 128 MB RAM and 1GHZ cpu.
Then we got newer boards like those but with 256 MB RAM and 1.3GHz cpu and gigabit.
Occasionally the boards would reset.
It turned-out that due to all the extra resources and the dynamic sizing his code used, every now and then the HW watchdog timer was not getting twiddled (0.5s on a system with realtime requirements of various things in some cases of 4ms) and the board was being reset.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167259</parent>
</comment>
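The watchdog failure mode in that story can be sketched as a toy simulation. The 0.5 s window and 4 ms tasks are taken from the comment; the 600 ms stall, the kick cadence, and everything else are invented for illustration (this is not the poster's code):

```python
# Toy model: a hardware watchdog resets the board if it is not "kicked"
# within its window. Tasks normally take 4 ms, but a dynamically-sized
# task occasionally stalls far longer and blows the window.
WATCHDOG_MS = 500          # window from the comment: 0.5 s

since_kick = 0
resets = 0
for i in range(1000):
    task_ms = 600 if i % 250 == 249 else 4   # rare slow task stalls the loop
    since_kick += task_ms
    if since_kick >= WATCHDOG_MS:
        resets += 1        # hardware would reset the board here
        since_kick = 0
    elif i % 10 == 0:
        since_kick = 0     # normal path: kick the watchdog well in time
print(resets)              # 4 of the 1000 slots trip the watchdog
```

The sketch shows why the failure is intermittent: the normal 4 ms path kicks the watchdog with huge margin, and only the occasional variable-time task pushes the gap past 500 ms.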
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167535</id>
	<title>Re:Of Course</title>
	<author>morgan\_greywolf</author>
	<datestamp>1243869000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Heavier?  Yes.  But is it heavy on modern systems with plenty of processor and RAM?  No way.  It's my number one text editor for quick file edits.</p></htmltext>
<tokenext>Heavier ?
Yes. But is it heavy on modern systems with plenty of processor and RAM ?
No way .
It 's my number one text editor for quick file edits .</tokentext>
<sentencetext>Heavier?
Yes.  But is it heavy on modern systems with plenty of processor and RAM?
No way.
It's my number one text editor for quick file edits.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166951</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166817</id>
	<title>emulation layers</title>
	<author>Speare</author>
	<datestamp>1243865400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>When I was a little kid, I saw a new computing device: a Pacman cabinet at the local pinball parlour.</p><p>Since then, I've seen dozens of implementations of it, and they fall into two camps:  a knockoff that can hardly be called a Pacman-clone, or a full-up 100% authentic duplicate of the original.  Of course the latter is done with emulation.  Every important detail of the old hardware can be emulated so a true ROM copy can be run with the same timing and everything behaves properly.  If you know the proper secret patterns through the maze, then the deterministic behaviors of Inky, Pinky, Blinky and Clyde will not allow them to catch up to you.</p><p>We also have many kinds of indirection, where data must be handed through one protocol to another, in order to reach the intended platform.  I'm not just talking about TCP/IP and routers, but many new layers to the OSI layer cake: encryption, encoding, tunneling and translation.</p><p>Of course, emulation and indirection can go too far.  Imagine playing that ROM copy of Pacman on a MAME built for PPC running on Mac OS X Tiger's Rosetta layer, played through a VNC terminal over SSH via an HTTP proxy.  That's a contrived (but perfectly possible) example, but I see layers and layers of indirection in real operating systems and applications all the time.</p><p>To break "Page's Law," I expect one should focus on reducing the layers of emulation and indirection.</p></htmltext>
<tokenext>When I was a little kid , I saw a new computing device : a Pacman cabinet at the local pinball parlour.Since then , I 've seen dozens of implementations of it , and they fall into two camps : a knockoff that can hardly be called a Pacman-clone , or a full-up 100 \ % authentic duplicate of the original .
Of course the latter is done with emulation .
Every important detail of the old hardware can be emulated so a true ROM copy can be run with the same timing and everything behaves properly .
If you know the proper secret patterns through the maze , then the deterministic behaviors of Inky , Pinky , Blinky and Clyde will not allow them to catch up to you.We also have many kinds of indirection , where data must be handed through one protocol to another , in order to reach the intended platform .
I 'm not just talking about TCP/IP and routers , but many new layers to the OSI layer cake : encryption , encoding , tunneling and translation.Of course , emulation and indirection can go too far .
Imagine playing that ROM copy of Pacman on a MAME built for PPC running on Mac OS X Tiger 's Rosetta layer , played through a VNC terminal over SSH via an HTTP proxy .
That 's a contrived ( but perfectly possible ) example , but I see layers and layers of indirection in real operating systems and applications all the time.To break " Page 's Law , " I expect one should focus on reducing the layers of emulation and indirection .</tokentext>
<sentencetext>When I was a little kid, I saw a new computing device: a Pacman cabinet at the local pinball parlour.Since then, I've seen dozens of implementations of it, and they fall into two camps:  a knockoff that can hardly be called a Pacman-clone, or a full-up 100\% authentic duplicate of the original.
Of course the latter is done with emulation.
Every important detail of the old hardware can be emulated so a true ROM copy can be run with the same timing and everything behaves properly.
If you know the proper secret patterns through the maze, then the deterministic behaviors of Inky, Pinky, Blinky and Clyde will not allow them to catch up to you.We also have many kinds of indirection, where data must be handed through one protocol to another, in order to reach the intended platform.
I'm not just talking about TCP/IP and routers, but many new layers to the OSI layer cake: encryption, encoding, tunneling and translation.Of course, emulation and indirection can go too far.
Imagine playing that ROM copy of Pacman on a MAME built for PPC running on Mac OS X Tiger's Rosetta layer, played through a VNC terminal over SSH via an HTTP proxy.
That's a contrived (but perfectly possible) example, but I see layers and layers of indirection in real operating systems and applications all the time.To break "Page's Law," I expect one should focus on reducing the layers of emulation and indirection.</sentencetext>
</comment>
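The cost of stacked indirection layers can be made concrete. A hypothetical Python sketch (my own, not from the comment): every call to the innermost function must first traverse each wrapper layer, so one logical operation fans out into one hit per layer:

```python
# Each layer of indirection adds a call that is paid on every operation.
def add_layer(f):
    def wrapper(*args):
        wrapper.calls += 1        # count traversals of this layer
        return f(*args)
    wrapper.calls = 0
    return wrapper

def pixel(x):                     # stand-in for the real work
    return x + 1

layers = []
g = pixel
for _ in range(6):                # six layers: emulator, VNC, SSH, proxy, ...
    g = add_layer(g)
    layers.append(g)

result = g(0)                     # one logical call...
print(sum(w.calls for w in layers))   # ...traverses all 6 wrapper layers
```

The per-layer cost here is just a Python function call, but the shape of the problem is the same when the layers are an emulator, a display protocol, and a proxy: the overhead multiplies with every operation, which is why removing layers pays off.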
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167259</id>
	<title>Re:Of Course</title>
	<author>Anonymous</author>
	<datestamp>1243867680000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
<htmltext><div class="quote"><p>And this is often the curse of object-oriented programming. Objects carries more data than necessary for many of the uses of the object. Only a few cases exists where all the object data is used.</p></div><p>That sounds like bad software design that isn't specific to OO programming.  People are perfectly capable of wasting memory space and CPU cycles in any programming style.</p><p>For example, I worked with "senior" (~15 years on the job) C programmers who thought it was a good idea to use fixed-size global static arrays for everything.  They also couldn't grasp why their O(N^2) algorithm--which was SO fast on a small test data set--ran so slowly when used on real-world data with thousands of items.</p></htmltext>
<tokenext>And this is often the curse of object-oriented programming .
Objects carries more data than necessary for many of the uses of the object .
Only a few cases exists where all the object data is used.That sounds like bad software design that is n't specific to OO programming .
People are perfectly capable of wasting memory space and CPU cycles in any programming style.For example , I worked with " senior " ( ~ 15 years on the job ) C programmers who thought it was a good idea to use fixed-size global static arrays for everything .
They also could n't grasp why their O ( N ^ 2 ) algorithm--which was SO fast on a small test data set--ran so slowly when used on real-world data with thousands of items .</tokentext>
<sentencetext>And this is often the curse of object-oriented programming.
Objects carries more data than necessary for many of the uses of the object.
Only a few cases exists where all the object data is used.That sounds like bad software design that isn't specific to OO programming.
People are perfectly capable of wasting memory space and CPU cycles in any programming style.For example, I worked with "senior" (~15 years on the job) C programmers who thought it was a good idea to use fixed-size global static arrays for everything.
They also couldn't grasp why their O(N^2) algorithm--which was SO fast on a small test data set--ran so slowly when used on real-world data with thousands of items.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975</parent>
</comment>
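The O(N^2)-fast-on-test-data complaint above is easy to reproduce. A minimal sketch (mine; the duplicate-finding task is invented for illustration), counting comparisons instead of wall-clock time so the result is deterministic:

```python
# Why an O(N^2) algorithm that flies on a tiny test set crawls on real data.
def duplicates_quadratic(items):
    comparisons = 0
    dups = set()
    for i in range(len(items)):
        for j in range(i + 1, len(items)):   # compare every pair
            comparisons += 1
            if items[i] == items[j]:
                dups.add(items[i])
    return dups, comparisons

def duplicates_linear(items):
    steps = 0
    seen, dups = set(), set()
    for x in items:                          # one hash lookup per item
        steps += 1
        if x in seen:
            dups.add(x)
        seen.add(x)
    return dups, steps

small = list(range(20)) + [5]                # 21-item "test data set"
big = list(range(5000)) + [42]               # 5001-item "real-world data"
_, c_small = duplicates_quadratic(small)
_, c_big = duplicates_quadratic(big)
_, s_big = duplicates_linear(big)
print(c_big // c_small)   # ~240x more data -> ~60,000x more comparisons
print(s_big)              # the linear version does a single pass
```

Growing the input by a factor of ~240 multiplies the pairwise work by that factor squared, which is exactly the surprise the programmers in the anecdote ran into.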
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167511</id>
	<title>Re:Of Course</title>
	<author>Anonymous</author>
	<datestamp>1243868880000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>0</modscore>
<htmltext><blockquote><div><p>And this is often the curse of object-oriented programming. Objects carries more data than necessary for many of the uses of the object. Only a few cases exists where all the object data is used. A lot of object-oriented programming is somewhat like using 18-wheelers for grocery shopping.</p></div></blockquote><p>I hate to have to be the one to break this to you, but</p><p>you are a retard. (And probably a Real Programmer too, or at least what passes for one these days)</p><p>There are a <b>lot</b> of programs with excessive memory usage that don't use object-oriented languages, and there are a <b>lot</b> of programs with proper memory usage that do use object-oriented languages. Programmer skill (or lack thereof) is far more of a contributing factor, to such a degree that tiny bits of overhead from using OO are lost in the noise.</p><p>If I had to choose one single thing as "the curse of OOP", it'd probably instead be that it makes it far too easy to add needless complexity and abstraction and class hierarchies a fucking mile deep.</p></htmltext>
<tokenext>And this is often the curse of object-oriented programming .
Objects carries more data than necessary for many of the uses of the object .
Only a few cases exists where all the object data is used .
A lot of object-oriented programming is somewhat like using 18-wheelers for grocery shopping.I hate to have to be the one to break this to you , butyou are a retard .
( And probably a Real Programmer too , or at least what passes for one these days ) There are a lot of programs with excessive memory usage that do n't use object-oriented languages , and there 's a lot of programs with proper memory usage that do use object-oriented languages .
Programmer skill ( or lack thereof ) is far more of a contributing factor , to such a degree that tiny bits of overhead from using OO is lost in the noise.If I had to choose one single thing as " the curse of OOP " , it 'd probably instead be that it makes it far too easy to add needless complexity and abstraction and class hierarchies a fucking mile deep .</tokentext>
<sentencetext>And this is often the curse of object-oriented programming.
Objects carries more data than necessary for many of the uses of the object.
Only a few cases exists where all the object data is used.
A lot of object-oriented programming is somewhat like using 18-wheelers for grocery shopping.I hate to have to be the one to break this to you, butyou are a retard.
(And probably a Real Programmer too, or at least what passes for one these days)There are a lot of programs with excessive memory usage that don't use object-oriented languages, and there's a lot of programs with proper memory usage that do use object-oriented languages.
Programmer skill (or lack thereof) is far more of a contributing factor, to such a degree that tiny bits of overhead from using OO is lost in the noise.If I had to choose one single thing as "the curse of OOP", it'd probably instead be that it makes it far too easy to add needless complexity and abstraction and class hierarchies a fucking mile deep.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166735</id>
	<title>I don't think that holds up</title>
	<author>viyh</author>
	<datestamp>1243864920000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
<htmltext>"Page's Law" seems to be a tongue-in-cheek joke, since it's cited primarily by the Google folks themselves. It definitely isn't true across the board. It's purely a matter of a) what the software application is and b) how the project is managed/developed. If the application is something like a web browser, where web standards are constantly being changed and updated so the software must follow suit, I could see where "Page's Law" might be true. But if the product is well managed and code isn't constantly grandfathered in (i.e., the developers know when to start from scratch), then it wouldn't necessarily be a problem.</htmltext>
<tokenext>" Page 's Law " seems to be a tongue in cheek joke since it 's cited primarily by the Google folks themselves .
It definitely is n't true across the board .
It 's purely a matter of a ) what the software application is and b ) how the project is managed/developed .
If the application is something like a web browser where web standards are constantly being changed and updated so the software must follow suit , I could see where " Page 's Law " might be true .
But if the product is well managed and code is n't constantly grandfathered in ( i.e. , the developers know when to start from scratch ) then it would n't necessarily be a problem .</tokentext>
<sentencetext>"Page's Law" seems to be a tongue-in-cheek joke since it's cited primarily by the Google folks themselves.
It definitely isn't true across the board.
It's purely a matter of a) what the software application is and b) how the project is managed/developed.
If the application is something like a web browser, where web standards are constantly being changed and updated so the software must follow suit, I could see where "Page's Law" might be true.
But if the product is well managed and code isn't constantly grandfathered in (i.e., the developers know when to start from scratch) then it wouldn't necessarily be a problem.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28183431</id>
	<title>Page's Law?</title>
	<author>geekoid</author>
	<datestamp>1243961580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>What a pathetic attempt to get known like Moore.</p><p>It's not true.<br>In effect they are saying no software gets faster when the number of transistors within a specific area doubles.<br>Stupid.</p><p>The other side is "When adding more features software has more to do!" No. Shit.</p><p>How much work is being done? THAT'S the only relevant metric.</p><p>Google has clearly peaked.</p></htmltext>
<tokenext>What a pathetic attempt to get known like Moore . It 's not true . In effect they are saying no software gets faster when the number of transistors within a specific area doubles . Stupid . The other side is " When adding more features software has more to do !
" No .
Shit . How much work is being done ?
THAT 'S the only relevant metric . Google has clearly peaked .</tokentext>
<sentencetext>What a pathetic attempt to get known like Moore. It's not true. In effect they are saying no software gets faster when the number of transistors within a specific area doubles. Stupid.
The other side is "When adding more features software has more to do!" No. Shit.
How much work is being done? THAT'S the only relevant metric.
Google has clearly peaked.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28184761</id>
	<title>Does it even apply outside of Google?</title>
	<author>Anonymous</author>
	<datestamp>1243967220000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I honestly don't think so...</p><p>Unless you have monkeys mindlessly adding features until the HW grinds to a halt. Then they discover the backspace button and the cycle repeats.</p></htmltext>
<tokenext>I honestly do n't think so ... Unless you have monkeys mindlessly adding features until the HW grinds to a halt .
Then they discover backspace button and cycle repeats .</tokentext>
<sentencetext>I honestly don't think so... Unless you have monkeys mindlessly adding features until the HW grinds to a halt.
Then they discover the backspace button and the cycle repeats.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167069</id>
	<title>Grosch's (other) Law</title>
	<author>Anonymous</author>
	<datestamp>1243866840000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>Herb Grosch said it in the 1960's: Anything the hardware boys come up with, the software boys will piss away.</p></htmltext>
<tokenext>Herb Grosch said it in the 1960 's : Anything the hardware boys come up with , the software boys will piss away .</tokentext>
<sentencetext>Herb Grosch said it in the 1960's: Anything the hardware boys come up with, the software boys will piss away.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28171891</id>
	<title>Re:Of Course</title>
	<author>Twinbee</author>
	<datestamp>1243887480000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Since C structs are effectively C++ objects, would you be against using structs for the same reasons too?</p></htmltext>
<tokenext>Since C structs are effectively C + + objects , would you be against using structs for the same reasons too ?</tokentext>
<sentencetext>Since C structs are effectively C++ objects, would you be against using structs for the same reasons too?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28169151</id>
	<title>Re:Of Course</title>
	<author>Anonymous</author>
	<datestamp>1243875720000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Your implication that OO is unsuitable in many circumstances is extremely misleading. For simple scripts, sure, forget it. But show me any complex system that can't be done well with OO.</p></htmltext>
<tokenext>Your implication that OO is unsuitable in many circumstances is extremely misleading .
For simple scripts , sure , forget it .
But show me any complex system that ca n't be done well with OO .</tokentext>
<sentencetext>Your implication that OO is unsuitable in many circumstances is extremely misleading.
For simple scripts, sure, forget it.
But show me any complex system that can't be done well with OO.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168151</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167043</id>
	<title>Meanwhile Looser's Law *is* broken...</title>
	<author>Dystopian Rebel</author>
	<datestamp>1243866660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>From the transcript of the speech:</p><p>"you never loose a dream"</p></htmltext>
<tokenext>From the transcript of the speech : " you never loose a dream "</tokentext>
<sentencetext>From the transcript of the speech:"you never loose a dream"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167375</id>
	<title>Ask Apple how they do it.</title>
	<author>toby</author>
	<datestamp>1243868220000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>10.0, 10.1, 10.2, 10.3, and maybe 10.4 were a series of releases where performance improved with each update. I don't run 10.5 so I can't comment on whether the trend continues.</p></htmltext>
<tokenext>10.0 , 10.1 , 10.2 , 10.3 , and maybe 10.4 was a series of releases where performance improved with each update .
I do n't run 10.5 so ca n't comment if the trend continues .</tokentext>
<sentencetext>10.0, 10.1, 10.2, 10.3, and maybe 10.4 were a series of releases where performance improved with each update.
I don't run 10.5 so I can't comment on whether the trend continues.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28175607</id>
	<title>Re:Of Course</title>
	<author>billcopc</author>
	<datestamp>1243859580000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>That's still about 100 times more memory than is required to edit a text file.  How do you think people got by in the 286 days when 640 KB was standard?  Does vim allocate ridiculously oversized buffers just to show a blank screen?</p><p>I don't mean to pick on vim specifically; all software is guilty of this pointless bloat.  Instead of having tiny apps that load and run at lightning speed, we continue to build these sloppy behemoths that can't accomplish the simplest things without triggering a dozen page faults and diddling some redundant spinlocks.  It's fine to add media to make things esthetically pleasing, but code bloat benefits no one.</p><p>With today's hardware and its ludicrous speed, we should be adding intentional delays to our code, because it should be running so damned fast that usability would suffer.  The user should be the bottleneck, not the software.  We have machines that are literally a thousand times faster than that heavy old 286, yet the load times for today's software are longer than booting WordPerfect 5.1 from a 360 KB floppy.</p></htmltext>
<tokenext>That 's still about 100 times more memory than is required to edit a text file .
How do you think people got by in the 286 days when 640 Kb was standard ?
Does vim allocate ridiculously oversized buffers just to show a blank screen ? I do n't mean to pick on vim specifically , all software is guilty of this pointless bloat .
Instead of having tiny apps that load and run at lightning speed , we continue to build these sloppy behemoths that ca n't accomplish the simplest things without triggering a dozen page faults and diddling some redundant spinlocks .
It 's fine to add media to make things esthetically pleasing , but code bloat benefits no one.With today 's hardware and its ludicrous speed , we should be adding intentional delays to our code , because it should be running so damned fast that usability would suffer .
The user should be the bottleneck , not the software .
We have machines that are literally a thousand times faster than that heavy old 286 , yet the load times for today 's software are longer than booting Wordperfect 5.1 from a 360k floppy .</tokentext>
<sentencetext>That's still about 100 times more memory than is required to edit a text file.
How do you think people got by in the 286 days when 640 KB was standard?
Does vim allocate ridiculously oversized buffers just to show a blank screen? I don't mean to pick on vim specifically; all software is guilty of this pointless bloat.
Instead of having tiny apps that load and run at lightning speed, we continue to build these sloppy behemoths that can't accomplish the simplest things without triggering a dozen page faults and diddling some redundant spinlocks.
It's fine to add media to make things esthetically pleasing, but code bloat benefits no one. With today's hardware and its ludicrous speed, we should be adding intentional delays to our code, because it should be running so damned fast that usability would suffer.
The user should be the bottleneck, not the software.
We have machines that are literally a thousand times faster than that heavy old 286, yet the load times for today's software are longer than booting WordPerfect 5.1 from a 360 KB floppy.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168711</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28184117</id>
	<title>Godfather's Law</title>
	<author>GodfatherofSoul</author>
	<datestamp>1243964640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>def'n: Ascribe your name to some ambiguous phenomenon with barely enough repeated occurrence to be defined and immortalize yourself in the annals of internet history.  If your code is getting 2x as slow every 18 months, you need to pursue a new career.</htmltext>
<tokenext>def'n : Ascribe your name to some ambiguous phenomenon with barely enough repeated occurrence to be defined and immortalize yourself in the annals of internet history .
If your code is getting 2x as slow every 18 months , you need to pursue a new career .</tokentext>
<sentencetext>def'n: Ascribe your name to some ambiguous phenomenon with barely enough repeated occurrence to be defined and immortalize yourself in the annals of internet history.
If your code is getting 2x as slow every 18 months, you need to pursue a new career.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28171181</id>
	<title>Already broken</title>
	<author>Co0Ps</author>
	<datestamp>1243885020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It's already broken. Microsoft's software gets twice as slow every 9 months.</htmltext>
<tokenext>It 's already broken .
Microsoft 's software gets twice as slow every 9 months .</tokentext>
<sentencetext>It's already broken.
Microsoft's software gets twice as slow every 9 months.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168691</id>
	<title>What Intel Giveth, Microsoft Taketh Away</title>
	<author>Anonymous</author>
	<datestamp>1243873860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Is another version.</htmltext>
<tokenext>Is another version .</tokentext>
<sentencetext>Is another version.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168159</id>
	<title>Re:Of Course</title>
	<author>hedwards</author>
	<datestamp>1243871580000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext>That's definitely a large part of the problem, but probably the bigger problem is just the operating assumption that we can add more features just because tomorrow's hardware will handle it. In most cases I would rather have the ability to add a plug-in or extension for things which are less commonly done with an application than have everything tossed in by default.<br> <br>

Why this is news is beyond me; I seem to remember people complaining about MS doing that sort of thing years ago. Just because the hardware can handle it doesn't mean that it should; tasks should be taking less time as new advancements arrive, and adding complexity is only reasonable when it does a better job.</htmltext>
<tokenext>That 's definitely a large part of the problem , but probably the bigger problem is just the operating assumption that we can add more features just because tomorrows hardware will handle it .
In most cases I would rather have the ability to add a plug in or extension for things which are less commonly done with an application than have everything tossed in by default .
Why this is news is beyond me , I seem to remember people complaining about MS doing that sort of thing years ago .
Just because the hardware can handle it does n't mean that it should , tasks should be taking less time as new advancements are going , adding complexity is only reasonable when it does a better job .</tokentext>
<sentencetext>That's definitely a large part of the problem, but probably the bigger problem is just the operating assumption that we can add more features just because tomorrow's hardware will handle it.
In most cases I would rather have the ability to add a plug-in or extension for things which are less commonly done with an application than have everything tossed in by default.
Why this is news is beyond me; I seem to remember people complaining about MS doing that sort of thing years ago.
Just because the hardware can handle it doesn't mean that it should; tasks should be taking less time as new advancements arrive, and adding complexity is only reasonable when it does a better job.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167259</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166951</id>
	<title>Re:Of Course</title>
	<author>Anonymous</author>
	<datestamp>1243866120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I can't speak to emacs, but these days vi is generally vim, which is much, much heavier than classic vi. It also does vastly more.</p></htmltext>
<tokenext>I ca n't speak to emacs , but these days vi is generally vim , which is much much heavier than classic vi .
It also does vastly more .</tokentext>
<sentencetext>I can't speak to emacs, but these days vi is generally vim, which is much, much heavier than classic vi.
It also does vastly more.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168531</id>
	<title>Page's Law is really May's Law!</title>
	<author>Winter Lightning</author>
	<datestamp>1243873200000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>"Page's law" is simply a restatement of May's law:</p><p>"Software efficiency halves every 18 months, compensating Moore's Law".</p><p>David May is a British computer scientist who was the lead architect for the Transputer.  See:<br><a href="http://en.wikipedia.org/wiki/David_May_(computer_scientist)" title="wikipedia.org" rel="nofollow">http://en.wikipedia.org/wiki/David_May_(computer_scientist)</a> [wikipedia.org]<br>and page 20 of:<br><a href="http://www.cs.bris.ac.uk/~dave/iee.pdf" title="bris.ac.uk" rel="nofollow">http://www.cs.bris.ac.uk/~dave/iee.pdf</a> [bris.ac.uk]</p></htmltext>
<tokenext>" Page 's law " is simply a restatement of May 's law : " Software efficiency halves every 18 months , compensating Moore 's Law " . David May is a British computer scientist who was the lead architect for the Transputer .
See : http://en.wikipedia.org/wiki/David_May_(computer_scientist) [ wikipedia.org ] and page 20 of : http://www.cs.bris.ac.uk/~dave/iee.pdf [ bris.ac.uk ]</tokentext>
<sentencetext>"Page's law" is simply a restatement of May's law: "Software efficiency halves every 18 months, compensating Moore's Law". David May is a British computer scientist who was the lead architect for the Transputer.
See: http://en.wikipedia.org/wiki/David_May_(computer_scientist) [wikipedia.org] and page 20 of: http://www.cs.bris.ac.uk/~dave/iee.pdf [bris.ac.uk]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28191845</id>
	<title>Re:Of Course</title>
	<author>SL Baur</author>
	<datestamp>1243963740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>This often explains why old languages like C, Cobol etc. are able to do the same thing as a program written in C++, Java or C# at a fraction of the resource cost and at much greater speed.</p></div><p>C is very much "sugar-coated assembly language".  As a very low-level language it allows dispensing with all unnecessary language features.  COBOL was never a language touted for its speed; a better example is Ada.  While not quite as low-level as C, it has come a remarkably long way as compiler technology has improved, and it *was* designed with performance as a goal.</p><p><div class="quote"><p>The disadvantage is that the old languages require more skills from the programmer to avoid the classical problems of deadlocks and race conditions as well as having to implement functionality for linked lists etc.</p></div><p>Those are strange examples.  C++ and Java have no special provisions to address deadlocks (out of the languages listed so far, only Ada attempted to address that with its rendezvous-based parallelism).  Any real-world parallel application is prone to race conditions no matter what language it's written in.  Linked lists as a basic type are common in functional languages and there's no doubt about it: it's a lot more convenient to deal with lists as a basic type, not a library.</p><p><div class="quote"><p>A lot of object-oriented programming is somewhat like using 18-wheelers for grocery shopping.</p></div><p>Maybe you should have stopped there.</p><p>I think the internet would be better off with something more akin to Ada as a base language than anything else.  The way the language regards data coming from the outside world with utter contempt and horror is exactly the right kind of attitude you need to develop safe web applications.</p>
	</htmltext>
<tokenext>This often explains why old languages like C , Cobol etc .
are able to do the same thing as a program written in C + + , Java or C # at the fraction of the resource cost and at much greater speed.C is very much " sugar coated assembly language " .
As a very low level language it allows dispensing with all unnecessary language features .
COBOL was never a language touted for its speed , a better example is Ada .
While not quite as low level as C , it has come a remarkably long way as compiler technology has improved and it * was * designed with performance as a goal . The disadvantage is that the old languages require more skills from the programmer to avoid the classical problems of deadlocks and race conditions as well as having to implement functionality for linked lists etc . Those are strange examples .
C + + and Java have no special provisions to address deadlocks ( out of the languages listed so far , only Ada attempted to address that with its rendezvous based parallelism ) .
Any real world parallel application is prone to race conditions no matter what language it 's written in .
Linked lists as basic type are common in functional languages and there 's no doubt about it , it 's a lot more convenient to deal with lists as a basic type not a library.A lot of object-oriented programming is somewhat like using 18-wheelers for grocery shopping.Maybe you should have stopped there.I think the internet would be better off with something more akin to Ada as a base language than anything else .
The way the language regards data coming from the outside world with utter contempt and horror is exactly the right kind of attitude you need to develop safe web applications .</tokentext>
<sentencetext>This often explains why old languages like C, Cobol etc. are able to do the same thing as a program written in C++, Java or C# at a fraction of the resource cost and at much greater speed.
C is very much "sugar-coated assembly language". As a very low-level language it allows dispensing with all unnecessary language features.
COBOL was never a language touted for its speed; a better example is Ada.
While not quite as low-level as C, it has come a remarkably long way as compiler technology has improved, and it *was* designed with performance as a goal.
The disadvantage is that the old languages require more skills from the programmer to avoid the classical problems of deadlocks and race conditions as well as having to implement functionality for linked lists etc. Those are strange examples.
C++ and Java have no special provisions to address deadlocks (out of the languages listed so far, only Ada attempted to address that with its rendezvous-based parallelism).
Any real-world parallel application is prone to race conditions no matter what language it's written in.
Linked lists as a basic type are common in functional languages and there's no doubt about it: it's a lot more convenient to deal with lists as a basic type, not a library. A lot of object-oriented programming is somewhat like using 18-wheelers for grocery shopping. Maybe you should have stopped there. I think the internet would be better off with something more akin to Ada as a base language than anything else.
The way the language regards data coming from the outside world with utter contempt and horror is exactly the right kind of attitude you need to develop safe web applications.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28175657</id>
	<title>Agreed, 110% (OO has the downsides you noted)</title>
	<author>Anonymous</author>
	<datestamp>1243860060000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p><b>"This often explains why old languages like C, Cobol etc. are able to do the same thing as a program written in C++, Java or C# at a fraction of the resource cost and at much greater speed. The disadvantage is that the old languages require more skills from the programmer"</b> - by Z00L00K (682162) on Monday June 01, @09:24AM (#28166975) Homepage</p></div><p>True! Because, iirc, for every object you instantiate, it's an added 472 bytes of memory used by said object, at least in Win32 PE environs (such as Microsoft &amp; Borland compilers (I prefer the latter, Delphi being my fav. even to this very day)).</p><p>NOW - though that might not seem like a lot, you have to consider that the GUI alone is probably composed of N objects, &amp; whatever classes you make full-blown objects will be adding additional overheads, per each one created.</p><p>(Plus, hey - I don't need an object-oriented design to do a "Hello World" level (meaning simpler/smaller) program:  procedural programming does the job nicely!)</p><p>APK</p>
	</htmltext>
<tokenext>" This often explains why old languages like C , Cobol etc .
are able to do the same thing as a program written in C + + , Java or C # at the fraction of the resource cost and at much greater speed .
The disadvantage is that the old languages require more skills from the programmer " - by Z00L00K ( 682162 ) on Monday June 01 , @ 09 : 24AM ( # 28166975 ) HomepageTrue !
Because , iirc ?
For every object you instance , it 's an added 472 bytes of memory used by said object , @ least in Win32 PE environs ( such as Microsoft &amp; Borland compilers ( I prefer the latter , Delphi being my fav .
even to this very day ) .NOW - Though that might not seem like a lot , you have to consider that the gui alone is probably composed of N objects , &amp; whatever classes you make full-blown objects will be adding additional overheads , per each one created .
( Plus , Hey - I do n't need an object-oriented design to do a " Hello World " level ( meaning simpler/smaller ) program : Procedural programming does the job nicely !
) APK</tokentext>
<sentencetext>"This often explains why old languages like C, Cobol etc.
are able to do the same thing as a program written in C++, Java or C# at a fraction of the resource cost and at much greater speed.
The disadvantage is that the old languages require more skills from the programmer" - by Z00L00K (682162) on Monday June 01, @09:24AM (#28166975) Homepage
True! Because, iirc, for every object you instantiate, it's an added 472 bytes of memory used by said object, at least in Win32 PE environs (such as Microsoft &amp; Borland compilers (I prefer the latter, Delphi being my fav. even to this very day)).
NOW - though that might not seem like a lot, you have to consider that the GUI alone is probably composed of N objects, &amp; whatever classes you make full-blown objects will be adding additional overheads, per each one created.
(Plus, hey - I don't need an object-oriented design to do a "Hello World" level (meaning simpler/smaller) program: procedural programming does the job nicely!)
APK
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167767</id>
	<title>Puh-lease</title>
	<author>Anonymous</author>
	<datestamp>1243869960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"Page's law"?  Not too egotistical, is he?  I guess by stating the obvious that makes it his idea.</p></htmltext>
<tokenext>" Page 's law " ?
Not too egotistical is he ?
I guess by stating the obvious that makes it his idea .</tokentext>
<sentencetext>"Page's law"?
Not too egotistical is he?
I guess by stating the obvious that makes it his idea.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167949</id>
	<title>Theoretically, yes. Practically, not often</title>
	<author>jollyreaper</author>
	<datestamp>1243870620000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Business managers don't want to pay for great when good will do. Have you gotten the beta to compile yet? Good, we're shipping. I don't care if it was a tech demo, I don't care if you said your plan was to figure out how to do it first, then go back through and do it right. We have a deadline, get your ass in gear.</p><p>Then the next release cycle comes around and they want more features, cram them in, or fuck it, we'll just outsource it to India. We don't know how to write a decent design spec, and so even if the Indians are good programmers, the language barrier and cluelessness will lead to disaster.</p><p>And here's the real kicker -- why bother to write better when people buy new computers every three years? We'll just throw hardware at the problem. == this is the factor that's likely to change the game.</p><p>If you look at consoles, games typically get better the longer they're on the market, because programmers become more familiar with the platform and what it can do. You're not throwing more hardware at the problem, not until the new console ships. That could be years and years away, just for the shipping, and even more years until there's decent market penetration. No, you have to do something wonderful and new and it has to be done on the current hardware. You're forced to get creative.</p><p>With the push towards netbooks and relatively low-power systems (low-power by today's standards!), programmers won't be able to count on power outstripping bloat. They'll have to concentrate on efficiency or else they won't have a product.</p><p>There's also the question of how much the effort is worth. $5000 in damage to my current car totals it, even if it could be repaired. I can go out and buy a new car. In Cuba, there's no such thing as a new car, there's only so many on the market. (are they able to import any these days?) Anyway, that explains why the 1950s disposable rustbuckets are still up and running.
When no new cars are available for love or money, the effort in keeping an old one running pays for itself.</p><p>Excellence has to be a priority coming down from the top in a company. If cut-rate expediency is the order of the day, crap will be the result.</p></htmltext>
<tokenext>Business managers do n't want to pay for great when good will do .
Have you gotten the beta to compile yet ?
Good , we 're shipping .
I do n't care if it was a tech demo , I do n't care if you said your plan was to figure out how to do it first , then go back through and do it right .
We have a deadline , get your ass in gear.Then the next release cycle comes around and they want more features , cram them in , or fuck it we 'll just outsource it to India .
We do n't know how to write a decent design spec and so even if the Indians are good programmers , the language barrier and cluelessness will lead to disaster.And here 's the real kicker -- why bother to write better when people buy new computers every three years ?
We 'll just throw hardware at the problem .
= = this is the factor that 's likely to change the game.If you look at consoles , games typically get better the longer it 's on the market because programmers become more familiar with the platform and what it can do .
You 're not throwing more hardware at the problem , not until the new console ships .
That could be years and years away , just for the shipping , and even more years until there 's decent market penetration .
No , you have to do something wonderful and new and it has to be done on the current hardware .
You 're forced to get creative.With the push towards netbooks and relatively low-power systems ( low-power by today 's standards !
) , programmers wo n't be able to count on power outstripping bloat .
They 'll have to concentrate on efficiency or else they wo n't have a product.There 's also the question of how much the effort is worth .
$ 5000 in damage to my current car totals it , even if it could be be repaired .
I can go out and buy a new car .
In Cuba , there 's no such thing as a new car , there 's only so many on the market .
( are they able to import any these days ?
) Anyway , that explains why the 1950 's disposable rustbuckets are still up and running .
When no new cars are available for love or money , the effort in keeping an old one running pays for itself.Excellence has to be a priority coming down from the top in a company .
If cut-rate expediency is the order of the day , crap will be the result .</tokentext>
<sentencetext>Business managers don't want to pay for great when good will do.
Have you gotten the beta to compile yet?
Good, we're shipping.
I don't care if it was a tech demo, I don't care if you said your plan was to figure out how to do it first, then go back through and do it right.
We have a deadline, get your ass in gear.Then the next release cycle comes around and they want more features, cram them in, or fuck it we'll just outsource it to India.
We don't know how to write a decent design spec and so even if the Indians are good programmers, the language barrier and cluelessness will lead to disaster.And here's the real kicker -- why bother to write better when people buy new computers every three years?
We'll just throw hardware at the problem.
== this is the factor that's likely to change the game.If you look at consoles, games typically get better the longer it's on the market because programmers become more familiar with the platform and what it can do.
You're not throwing more hardware at the problem, not until the new console ships.
That could be years and years away, just for the shipping, and even more years until there's decent market penetration.
No, you have to do something wonderful and new and it has to be done on the current hardware.
You're forced to get creative.With the push towards netbooks and relatively low-power systems (low-power by today's standards!
), programmers won't be able to count on power outstripping bloat.
They'll have to concentrate on efficiency or else they won't have a product.There's also the question of how much the effort is worth.
$5000 in damage to my current car totals it, even if it could be be repaired.
I can go out and buy a new car.
In Cuba, there's no such thing as a new car, there's only so many on the market.
(are they able to import any these days?
) Anyway, that explains why the 1950's disposable rustbuckets are still up and running.
When no new cars are available for love or money, the effort in keeping an old one running pays for itself.Excellence has to be a priority coming down from the top in a company.
If cut-rate expediency is the order of the day, crap will be the result.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28178429</id>
	<title>Re:Of Course</title>
	<author>Anonymous</author>
	<datestamp>1243885860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Now  listen here nigger:  I'm just about <b>SICK TO</b> =&gt; <i>death</i> of your dumbass faggotry.</p><p>This paragraph makes no goddamn sense!  Look at me, I'm APK.</p><p>Listen here motherfucker:  I know where to find you.  And I'm watching</p><p><tt>Yuri Klastalov</tt></p></htmltext>
<tokenext>Now listen here nigger : I 'm just about SICK TO = &gt; death of your dumbass faggotry.This paragraph makes no goddamn sense !
Look at me , I 'm APK.Listen here motherfucker : I know where to find you .
And I 'm watchingYuri Klastalov</tokentext>
<sentencetext>Now  listen here nigger:  I'm just about SICK TO =&gt; death of your dumbass faggotry.This paragraph makes no goddamn sense!
Look at me, I'm APK.Listen here motherfucker:  I know where to find you.
And I'm watchingYuri Klastalov</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28171081</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28171063</id>
	<title>Sure it can</title>
	<author>Tired and Emotional</author>
	<datestamp>1243884660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>But you have to care about it. You need to test it but that is not enough.
<p>
The problem is performance creep. It's easy to find big slowdowns with regression analysis, but you get perhaps as much as 5% variation in timings just depending on the phase of the moon. So any slowdown less than, say, 5% is not discernible from noise. As a result your performance can deteriorate by a few percent per checkin. Over a year that can mount up.
</p><p>
So you have to combat this by actually tuning the software every so often, say once per release, to recover the creep. And, of course, after you do this a couple of times it gets harder to knock hot spots on the head, and you have to start early in the release cycle because you have to rearchitect to actually make a difference.</p></htmltext>
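The creep-vs-noise problem described here can be sketched in a few lines. This is a hypothetical illustration, not a real CI setup: the 5% noise floor, the `benchmark` helper, and the workloads are all made up for the example.

```python
import statistics
import time

NOISE_FLOOR = 0.05  # run-to-run timing jitter (~5%), per the comment above


def benchmark(fn, repeats=5):
    """Median wall-clock time of fn() over several runs."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)


def regressed(baseline, current, floor=NOISE_FLOOR):
    """True when the slowdown exceeds the noise floor.

    Slowdowns under the floor are indistinguishable from jitter,
    which is exactly how a few percent per checkin slips through.
    """
    return (current - baseline) / baseline > floor


# Hypothetical workloads standing in for an old and a new build.
old_build = benchmark(lambda: sum(range(100_000)))
new_build = benchmark(lambda: sum(range(100_000)))
print(regressed(old_build, new_build))
```

A 3% per-checkin slowdown never trips this check, yet compounds over a year, which is why the periodic tuning pass is needed at all.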
<tokenext>But you have to care about it .
You need to test it but that is not enough .
The problem is performance creep .
Its easy to find big slowdowns with regression analysis , but you get perhaps as much a 5 \ % variations in timings just depending on the phase of the moon .
So any slowdown less than , say , 5 \ % is not discernable from noise .
As a result your performance can deteriorate by a few percent per checkin .
Over a year that can mount up .
So you have to combat this by actually tuning the software every so often - say once per release - to recover the creep .
And , of course , after you do this a couple of times , it gets harder to knock hot spots on the head and you have to do it early in the release cycle as you have to start rearchitecting to actually make a difference .</tokentext>
<sentencetext>But you have to care about it.
You need to test it but that is not enough.
The problem is performance creep.
Its easy to find big slowdowns with regression analysis, but you get perhaps as much a 5\% variations in timings just depending on the phase of the moon.
So any slowdown less than, say, 5\% is not discernable from noise.
As a result your performance can deteriorate by a few percent per checkin.
Over a year that can mount up.
So you have to combat this by actually tuning the software every so often - say once per release - to recover the creep.
And, of course, after you do this a couple of times, it gets harder to knock hot spots on the head and you have to do it early in the release cycle as you have to start rearchitecting to actually make a difference.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975</id>
	<title>Re:Of Course</title>
	<author>Z00L00K</author>
	<datestamp>1243866240000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>The law isn't linear, it's more sawtooth-style.</p><p>Features are added all the time, which bogs down the software; then there is an effort to speed it up, and then features are added again.</p><p>One catch in performance is that it sure is faster to use RAM for data, but there is also a lot of useless data floating around in RAM, which is a waste of resources.</p><p>And this is often the curse of object-oriented programming. Objects carry more data than necessary for many of the uses of the object. Only a few cases exist where all the object data is used. A lot of object-oriented programming is somewhat like using 18-wheelers for grocery shopping.</p><p>This often explains why old languages like C, Cobol etc. are able to do the same thing as a program written in C++, Java or C# at a fraction of the resource cost and at much greater speed. The disadvantage is that the old languages require more skill from the programmer to avoid the classical problems of deadlocks and race conditions, as well as having to implement functionality for linked lists etc.</p></htmltext>
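The "18-wheelers for grocery shopping" point can be made concrete with a small sketch. Python rather than C++/Java here, purely for illustration, and the `Order` class and its fields are invented for the example: an object that drags along fields most call sites never touch pays for them on every instance.

```python
import sys


class Order:
    """A 'fat' object: every instance carries a per-instance __dict__."""

    def __init__(self, order_id, price):
        self.order_id = order_id
        self.price = price
        # Fields that many call sites never touch:
        self.audit_trail = []
        self.display_cache = None


class SlimOrder:
    """Same useful payload; __slots__ drops the per-instance dict."""

    __slots__ = ("order_id", "price")

    def __init__(self, order_id, price):
        self.order_id = order_id
        self.price = price


fat = Order(1, 9.99)
slim = SlimOrder(1, 9.99)

# The fat instance carries a whole dict besides its two useful fields.
fat_bytes = sys.getsizeof(fat) + sys.getsizeof(fat.__dict__)
slim_bytes = sys.getsizeof(slim)
print(slim_bytes < fat_bytes)  # → True
```

Multiply that per-instance overhead by millions of objects and you get the "useless data floating around in RAM" the comment describes.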
<tokenext>The law is n't linear , it 's more sawtooth-style.Features are added all the time which bogs down the software , and then there is an effort to speed it up and then there are features added again.One catch in performance is that it sure is faster to use RAM for data , but there is also a lot of useless data floating around in RAM , which is a waste of resources.And this is often the curse of object-oriented programming .
Objects carries more data than necessary for many of the uses of the object .
Only a few cases exists where all the object data is used .
A lot of object-oriented programming is somewhat like using 18-wheelers for grocery shopping.This often explains why old languages like C , Cobol etc .
are able to do the same thing as a program written in C + + , Java or C # at the fraction of the resource cost and at much greater speed .
The disadvantage is that the old languages require more skills from the programmer to avoid the classical problems of deadlocks and race conditions as well as having to implement functionality for linked lists etc .</tokentext>
<sentencetext>The law isn't linear, it's more sawtooth-style.Features are added all the time which bogs down the software, and then there is an effort to speed it up and then there are features added again.One catch in performance is that it sure is faster to use RAM for data, but there is also a lot of useless data floating around in RAM, which is a waste of resources.And this is often the curse of object-oriented programming.
Objects carries more data than necessary for many of the uses of the object.
Only a few cases exists where all the object data is used.
A lot of object-oriented programming is somewhat like using 18-wheelers for grocery shopping.This often explains why old languages like C, Cobol etc.
are able to do the same thing as a program written in C++, Java or C# at the fraction of the resource cost and at much greater speed.
The disadvantage is that the old languages require more skills from the programmer to avoid the classical problems of deadlocks and race conditions as well as having to implement functionality for linked lists etc.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167341</id>
	<title>Larger user base</title>
	<author>DrWho520</author>
	<datestamp>1243868040000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>Making later versions of software run more efficiently on a baseline piece of hardware may also make the software run more efficiently on lesser pieces of hardware.  Does the increase in possible install base (since your software now runs on hardware slower than your baseline) justify a concerted effort to write software that runs more efficiently?</htmltext>
<tokenext>Making later versions of software run more efficiently on a baseline piece of hardware may also make the software run more efficiently on lesser pieces of hardware .
Does the increase in possible install base ( since your software now runs on hardware slower than your baseline ) justify a concerted effort to write software that runs more efficiently ?</tokentext>
<sentencetext>Making later versions of software run more efficiently on a baseline piece of hardware may also make the software run more efficiently on lesser pieces of hardware.
Does the increase in possible install base (since your software now runs on hardware slower than your baseline) justify a concerted effort to write software that runs more efficiently?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28172355</id>
	<title>Perfect Example of Page's law</title>
	<author>Khyber</author>
	<datestamp>1243889160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Camfrog. When it was version 3.2, it was just a 4 meg download. Now it's version 5.3, it's nearly 11 megs, and there hasn't been much of any improvement at all. In fact, it runs WORSE, crashes more often, and the only thing they did was add the ability to put stupid eyes over your face and a few shiny UI improvements.</p><p>Page's Law will cease to be a law when A. software/code patents are invalidated and B. programmers, thus free from the nonsense of software/code patents, code as efficiently as possible.</p><p>After all, some random programmers can make a fully 3D game in 64KB of code. Why can't 'professional' programmers do the same thing for their video chat programs, huh?</p></htmltext>
<tokenext>Camfrog .
When it was version 3.2 , it was just a 4 meg download .
Now it 's version 5.3 , it 's nearly 11 megs , and there has n't been much of any improvement at all .
In fact , it runs WORSE , crashes mroe often , and the only thing they did was add the ability to put stupid eyes over your face and a few shiny UI improvements.Page 's Law will cease to be a law when A. Software/code patents are invalidated and B. programmers , thus free from the nonsense of software/code patents , code as efficiently as possible.After all , some random programmers can make a fully 3D game in 64KB of code .
Why ca n't 'professional ' programmers do the same thing for their video chat programs , huh ?</tokentext>
<sentencetext>Camfrog.
When it was version 3.2, it was just a 4 meg download.
Now it's version 5.3, it's nearly 11 megs, and there hasn't been much of any improvement at all.
In fact, it runs WORSE, crashes mroe often, and the only thing they did was add the ability to put stupid eyes over your face and a few shiny UI improvements.Page's Law will cease to be a law when A. Software/code patents are invalidated and B. programmers, thus free from the nonsense of software/code patents, code as efficiently as possible.After all, some random programmers can make a fully 3D game in 64KB of code.
Why can't 'professional' programmers do the same thing for their video chat programs, huh?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166787</id>
	<title>Speaking of hardware power to waste</title>
	<author>Ilgaz</author>
	<datestamp>1243865280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I wasn't talking about it for a while as I am tired of Google fanatics, but what is the point of running software with Administrator (Windows)/superuser (Mac) privileges every 2 hours that will... check for updates?</p><p>I speak about the Google Updater, and I don't really CARE if it is open source or not.</p><p>Not just that, you are giving a very bad example for the industry to use as a reference. They have already started saying "but Google does it".</p><p>Is that part of the excuse? Because the hardware guys keep bailing out badly designed software coded by some reinvent-the-wheel guys? Does something run in your server farms opening a socket to the outside world every 2 hours to check for updates?</p><p>Listen, people purchasing $1400 software are bugged about their paid commercial software checking for updates, and that software only checks weekly and only <i>if the application runs</i>. We don't have hardware to waste or top certified security engineers to waste. Stop thinking everyone has huge undocumented server farms like you do.</p></htmltext>
<tokenext>I was n't talking about it for a while as I am tired of Google fanatics but , what is the point of running a software with Administrator ( win ) /Super User ( Mac ) privileges every 2 hours that will... check for updates ? I speak about the Google Updater and I do n't really CARE if it is open source or not.Not just that , you are giving a very bad example to industry to use as reference .
They already started talking about ''but Google does it''.Is that part of the excuse ?
Because hardware guys beat the badly designed software coded by some re-invent wheel guys ?
Does something run in your server farms opening a socket to the outside World every 2 hours that will check for updates ? Listen , people purchasing $ 1400 software are bugged about their paid commercial software checking for updates yet alone it does only check weekly and \ _if application runs \ _ .
We do n't have hardware to waste or some top certified security engineers to waste .
Stop thinking everyone has some undocumentedly large server farms like you .</tokentext>
<sentencetext>I wasn't talking about it for a while as I am tired of Google fanatics but, what is the point of running a software with Administrator(win)/Super User(Mac) privileges every 2 hours that will... check for updates?I speak about the Google Updater and I don't really CARE if it is open source or not.Not just that, you are giving a very bad example to industry to use as reference.
They already started talking about ''but Google does it''.Is that part of the excuse?
Because hardware guys beat the badly designed software coded by some re-invent wheel guys?
Does something run in your server farms opening a socket to the outside World every 2 hours that will check for updates?Listen, people purchasing $1400 software are bugged about their paid commercial software checking for updates yet alone it does only check weekly and \_if application runs\_.
We don't have hardware to waste or some top certified security engineers to waste.
Stop thinking everyone has some undocumentedly large server farms like you.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665</id>
	<title>Of Course</title>
	<author>Anonymous</author>
	<datestamp>1243864440000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><div class="quote"><p>Can "Page's Law" Be Broken?</p></div><p>I think it gets broken all the time. At least in my world. Look at Firefox 3 vs. 2. Seems to be a marked improvement in speed to me.</p><p>And as far as web application containers go, <i>most</i> of them seem to get faster and better at serving up pages. No, they may not be "twice as fast on twice as fast hardware", but I don't think they are twice as slow every three months.</p><p>I'm certain it happens all the time, you just don't notice: ancient products like vi, Emacs, Lisp interpreters, etc. stay pretty damn nimble as hardware takes off into the next century. People just can't notice an increase in speed when you're waiting on I/O like the user.</p></htmltext>
<tokenext>Can " Page 's Law " Be Broken ? I think it gets broken all the time .
At least in my world .
Look at Firefox 3 vs 2 .
Seems to be a marked improvement in speed to me .
And as far as web application containers go , most of them seem to get faster and better at serving up pages .
No , they may not be " twice as fast on twice as fast hardware " but I do n't think they are twice as slow every three months .
I 'm certain it happens all the time , you just do n't notice that ancient products like VI , Emacs , Lisp interpreters , etc stay pretty damn nimble as hardware takes off into the next century .
People just ca n't notice an increase in speed when you 're waiting on I/O like the user .</tokentext>
<sentencetext>Can "Page's Law" Be Broken?I think it gets broken all the time.
At least in my world.
Look at Firefox 3 vs 2.
Seems to be a marked improvement in speed to me.
And as far as web application containers go, most of them seem to get faster and better at serving up pages.
No, they may not be "twice as fast on twice as fast hardware" but I don't think they are twice as slow every three months.
I'm certain it happens all the time, you just don't notice that ancient products like VI, Emacs, Lisp interpreters, etc stay pretty damn nimble as hardware takes off into the next century.
People just can't notice an increase in speed when you're waiting on I/O like the user.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28172815</id>
	<title>Re:Of Course</title>
	<author>LavosPhoenix</author>
	<datestamp>1243847700000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext>Yeah, because passing an implicit "this" pointer in C++ and a typed object pointer to a function are so vastly different in storage sizes. Not. And seriously, if you are loading tons of useless data into an object, you've completely missed the point of object-oriented programming in the first place. So don't blame your failure to use logic and reasoning in OOP as a general case that applies to all software.

C# and Java use garbage collection, which involves nondeterministic reclamation of objects, which will affect performance. Sure, GC may allow for easier lock-free structures, but it simply pushes the delay to the GC, plus longer-term storage of the deleted objects, which then have to be reclaimed by a properly implemented lock-free GC. It's far more important to make sure your data fits in cache lines to prevent cache thrashing, where your data has to be reloaded into the CPU's cache from higher-level caches, like L3, or from RAM. Just take a look at Intel's Threading Building Blocks: none of its concurrent data structures are lock-free, but it does make sure to allocate objects to fit cache lines properly.</htmltext>
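The locality point above can be illustrated with a small Python sketch (Python stands in for the C++/Java discussion here; the sizes are interpreter-specific, not a claim about any particular CPU). A list of boxed floats scatters pointers and heap objects around memory, while a packed array keeps the same values contiguous, which is what lets data actually fit in cache lines.

```python
import sys
from array import array

n = 1000

# A list of Python floats: an array of pointers, each to a separate
# heap-allocated float object. Poor locality, lots of overhead.
boxed = [float(i) for i in range(n)]

# A packed array of raw 8-byte doubles: one contiguous buffer.
packed = array("d", range(n))

boxed_bytes = sys.getsizeof(boxed) + sum(sys.getsizeof(x) for x in boxed)
packed_bytes = sys.getsizeof(packed)
print(packed_bytes < boxed_bytes)  # → True
```

The contiguous layout is the same design choice the comment attributes to TBB: keep the hot data dense so a cache line fetch brings in useful neighbors instead of pointers.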
<tokenext>yeah , because passing a implicit " this " pointer in C + + and a typed object pointer to a function are so vastly different in storage sizes .
Not. And seriously , if you are loading tons of useless data into an object , you 've completely missed the point of Object oriented programming in the first place .
So do n't blame your failure to use logic and reasoning in OOP as a general case that applies to all software .
C # and Java use garbage collection , which involves nondeterministic reclaimation of objects , which will affect performance .
Sure , GC may allow for easier lockfree structures , but it simply pushes the delay to the GC , and a longer term storage of the deleted objects which then have to be reclaimed by the properly implemented lockfree GC .
It 's far more important to make sure your data fits in cache lines to prevent cache trashing , which means that your data has to be reloaded into the CPU 's cache from higher level caches , like L3 or RAM .
Just take a look at Intel 's Thread Building Blocks .
None of their concurrent data structures are lockfree , but do make sure to properly allocate objects to fit cache lines .</tokentext>
<sentencetext>yeah, because passing a implicit "this" pointer in C++ and a typed object pointer to a function are so vastly different in storage sizes.
Not. And seriously, if you are loading tons of useless data into an object, you've completely missed the point of Object oriented programming in the first place.
So don't blame your failure to use logic and reasoning in OOP as a general case that applies to all software.
C# and Java use garbage collection, which involves nondeterministic reclaimation of objects, which will affect performance.
Sure, GC may allow for easier lockfree structures, but it simply pushes the delay to the GC, and a longer term storage of the deleted objects which then have to be reclaimed by the properly implemented lockfree GC.
It's far more important to make sure your data fits in cache lines to prevent cache trashing, which means that your data has to be reloaded into the CPU's cache from higher level caches, like L3 or RAM.
Just take a look at Intel's Thread Building Blocks.
None of their concurrent data structures are lockfree, but do make sure to properly allocate objects to fit cache lines.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168657</id>
	<title>Re:Page's Law.</title>
	<author>Duncan3</author>
	<datestamp>1243873680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Is there anything in history safe from Google claiming credit for it?</p><p>Pretty soon it will be Google Wheel(tm) and Google Fire(tm)</p></htmltext>
<tokenext>Is there anything in history safe from Google claiming credit for it ? Pretty soon it will be Google Wheel ( tm ) and Google Fire ( tm )</tokentext>
<sentencetext>Is there anything in history safe from Google claiming credit for it?Pretty soon it will be Google Wheel(tm) and Google Fire(tm)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166881</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28184971</id>
	<title>Not the same laws</title>
	<author>lie2me</author>
	<datestamp>1243968120000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>"Wirth's law" is more quality-related, as in "crappy SW can benefit from faster HW".</p><p>"Gates's Law" is a user-side observation: "speed of commercial software generally slows by fifty percent every 18 months thereby negating all the benefits of Moore's Law".</p><p>"Page's Law" is a reflection on the SW development of a single company: "software gets twice as slow every 18 months... Google plans to reverse this trend and optimize its code."</p><p>I wonder if anyone else noticed these differences.</p></htmltext>
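The arithmetic behind all three formulations is the same compound rate, and it shows why Brin's "the hardware folks offset that" joke works: hardware doubling and software halving, both on an 18-month period, cancel exactly. A quick sketch (the function names are made up for the illustration):

```python
def doublings(months, period=18):
    """How many 18-month doubling periods fit in a span of months."""
    return months / period


def perceived_speed(months):
    """Relative speed the user sees if hardware doubles (Moore's Law)
    while software halves its speed (Page's/Gates's Law), both every
    18 months."""
    hardware_gain = 2 ** doublings(months)
    software_drag = 2 ** doublings(months)  # software gets 2x slower
    return hardware_gain / software_drag


print(perceived_speed(36))  # → 1.0: the two trends cancel exactly
```

Breaking Page's Law, as Brin describes it, means pushing `software_drag` below `hardware_gain` so the ratio finally climbs above 1 on the same hardware.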
<tokenext>" Wirth 's law " is more quality related , as in " crappy SW can benefit from faster HW " .
" Gates 's Law " is user-side observation , " speed of commercial software generally slows by fifty percent every 18 months thereby negating all the benefits of Moore 's Law " .
" Page 's Law " is reflection on SW development of a single company : " software gets twice as slow every 18 months... Google plans to reverse this trend and optimize its code .
" I wonder if anyone else noticed these differences .</tokentext>
<sentencetext>"Wirth's law" is more quality related, as in "crappy SW can benefit from faster HW".
"Gates's Law" is user-side observation, "speed of commercial software generally slows by fifty percent every 18 months thereby negating all the benefits of Moore's Law".
"Page's Law" is reflection on SW development of a single company: "software gets twice as slow every 18 months... Google plans to reverse this trend and optimize its code.
"I wonder if anyone else noticed these differences.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28174057</id>
	<title>Re:I don't think that holds up</title>
	<author>Anonymous</author>
	<datestamp>1243852080000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Niklaus Wirth said it best, years ago: "Software gets slower faster than hardware gets faster."</p><p>It should be obvious that this means "software as a whole system", not some individual piece of code you can optimize. We want to simulate the whole universe, but unfortunately, the universe is already an optimal representation of itself. So we'll never run out of tasks that suck up computational power. We also want to maximize human productivity, but that means sweeping optimization opportunities under the rug with each increase in abstraction. So our programs will continue to get less efficient overall.</p><p>--agy</p></htmltext>
<tokenext>Niklaus Wirth said it best , years ago : " Software gets slower faster than hardware gets faster .
" It should be obvious that this means " software as a whole system " , not some individual piece of code you can optimize .
We want to simulate the whole universe , but unfortunately , the universe is already an optimal representation of itself .
So we 'll never run out of tasks that suck up computational power .
We also want to maximize human productivity , but that means sweeping optimization opportunities under the rug with each increase in abstraction .
So our programs will continue to get less efficient overall.--agy</tokentext>
<sentencetext>Niklaus Wirth said it best, years ago: "Software gets slower faster than hardware gets faster.
"It should be obvious that this means "software as a whole system", not some individual piece of code you can optimize.
We want to simulate the whole universe, but unfortunately, the universe is already an optimal representation of itself.
So we'll never run out of tasks that suck up computational power.
We also want to maximize human productivity, but that means sweeping optimization opportunities under the rug with each increase in abstraction.
So our programs will continue to get less efficient overall.--agy</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166735</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167027</id>
	<title>Bloat wastes energy.</title>
	<author>miffo.swe</author>
	<datestamp>1243866540000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>One thing that rarely comes up when discussing bloat and slow, underperforming applications is energy consumption. While you can shave some percent off a server by maximizing hardware energy savings, in many cases you can save much more by optimizing its software.</p><p>I think it all comes down to economics. As long as the hardware and software industries live in symbiosis with their endless upgrade loop, we will have to endure this. Having your customers buy the same stuff over and over again is a precious cash cow they won't let go of voluntarily.</p></htmltext>
<tokenext>One thing that rarely comes up when discussing bloat and slow underperforming applications is energy consumption .
While you can shave off some percents off of a server by maximizing hardware energy savings you can save much more by optimizing its software in many cases.I think it all comes down to economics .
As long as the hardware and software industry lives in symbiosis with their endless upgrade loop we will have to endure this .
To have your customers buy the same stuff over and over again is a precious cash cow they wont let go off volontarily .</tokentext>
<sentencetext>One thing that rarely comes up when discussing bloat and slow underperforming applications is energy consumption.
While you can shave off some percents off of a server by maximizing hardware energy savings you can save much more by optimizing its software in many cases.I think it all comes down to economics.
As long as the hardware and software industry lives in symbiosis with their endless upgrade loop we will have to endure this.
To have your customers buy the same stuff over and over again is a precious cash cow they wont let go off volontarily.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167415</id>
	<title>We don't want it to be broken, really</title>
	<author>realmolo</author>
	<datestamp>1243868460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Hardware has advanced to the point that we don't care about performance all that much.</p><p>What is more of a concern is how easy it is to write software, and how easy it is to maintain that software, and how easy it is to port that software to other architectures. Efficiency of code generally means efficient use of a single architecture.  That's fine, but for code that has to last a long time (i.e., anything besides games), you want it to be written in a nice, easy-to-change way that can be moved around to different platforms for the next 20 years.</p></htmltext>
<tokenext>Hardware has advanced to the point that we do n't care about performance all that much . What is more of a concern is how easy it is to write software , and how easy it is to maintain that software , and how easy it is to port that software to other architectures .
Efficiency of code generally means efficient use of a single architecture .
That 's fine , but for code that has to last a long time ( i.e. , anything besides games ) , you want it to be written in a nice , easy-to-change way that can be moved around to different platforms for the next 20 years .</tokentext>
<sentencetext>Hardware has advanced to the point that we don't care about performance all that much. What is more of a concern is how easy it is to write software, and how easy it is to maintain that software, and how easy it is to port that software to other architectures.
Efficiency of code generally means efficient use of a single architecture.
That's fine, but for code that has to last a long time (i.e., anything besides games), you want it to be written in a nice, easy-to-change way that can be moved around to different platforms for the next 20 years.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28174955</id>
	<title>Re:Of Course</title>
	<author>cekander</author>
	<datestamp>1243855980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>OO is considered the be-all and end-all of programming</p></div><p>Not by me. I'm still waiting for AI that will take my high-level executive summary and produce an optimal program that achieves my goals. Until then, I suppose I'll accept OO.</p></htmltext>
<tokenext>OO is considered the be-all and end-all of programming . Not by me .
I 'm still waiting for AI that will take my high-level executive summary and produce an optimal program that achieves my goals .
Until then , I suppose I 'll accept OO .</tokentext>
<sentencetext>OO is considered the be-all and end-all of programming. Not by me.
I'm still waiting for AI that will take my high-level executive summary and produce an optimal program that achieves my goals.
Until then, I suppose I'll accept OO.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168151</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168935</id>
	<title>10 years experience on this</title>
	<author>mzs</author>
	<datestamp>1243875000000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>I have realized that in four days I will have been working as a dev for ten years. I've worked at a few places, and I think the reason for this is pretty straightforward: poor benchmarks, used poorly.</p><p>We have all heard the mantra that optimizing early is evil, but there are two issues to contend with. In every project you get to a crunch time towards the end, and then there is no time to address performance issues. By that time so much code is written that you cannot address the performance issues in the most effective way: thinking about which algorithm to use for the dataset that ends up being the common case. So instead some profiling work gets done and the code goes out the door.</p><p>So for success you need to have some performance measurements even early on. The problem is that in that case you end up with some benchmarks that don't measure the right thing (that is what you discover near the end), or you have worthless benchmarks that suffer too much from not being reproducible, taking too long to run, or not giving the dev any idea of where the performance problem really is.</p><p>So what ends up happening is that only after the code base has been around for a while, and you get to rev n + 1, is there any real handle on any of this performance stuff. But often what ends up happening is that project management values feature additions, so as long as no single benchmark decreases by more than 2-5% and the overall performance does not decrease by more than 15% compared to the pre-feature build, it gets the okay. Then a milestone arrives, there is no time again for systematic performance work, and it ships as is.</p><p>The right approach at that stage would be to not allow a new feature unless the overall benchmark improves by 2%, and to benchmark your competitors as well, but sadly that just does not happen except in the very rare good groups.</p></htmltext>
<tokenext>I have realized that in four days I will have been working as a dev for ten years .
I 've worked at a few places , and I think the reason for this is pretty straightforward : poor benchmarks , used poorly . We have all heard the mantra that optimizing early is evil , but there are two issues to contend with .
In every project you get to a crunch time towards the end , and then there is no time to address performance issues .
By that time so much code is written that you can not address the performance issues in the most effective way : thinking about which algorithm to use for the dataset that ends up being the common case .
So instead some profiling work gets done and the code goes out the door . So for success you need to have some performance measurements even early on .
The problem is that in that case you end up with some benchmarks that do n't measure the right thing ( that is what you discover near the end ) , or you have worthless benchmarks that suffer too much from not being reproducible , taking too long to run , or not giving the dev any idea of where the performance problem really is . So what ends up happening is that only after the code base has been around for a while , and you get to rev n + 1 , is there any real handle on any of this performance stuff .
But often what ends up happening is that project management values feature additions , so as long as no single benchmark decreases by more than 2-5 % and the overall performance does not decrease by more than 15 % compared to the pre-feature build , it gets the okay .
Then a milestone arrives , there is no time again for systematic performance work , and it ships as is . The right approach at that stage would be to not allow a new feature unless the overall benchmark improves by 2 % , and to benchmark your competitors as well , but sadly that just does not happen except in the very rare good groups .</tokentext>
<sentencetext>I have realized that in four days I will have been working as a dev for ten years.
I've worked at a few places, and I think the reason for this is pretty straightforward: poor benchmarks, used poorly. We have all heard the mantra that optimizing early is evil, but there are two issues to contend with.
In every project you get to a crunch time towards the end, and then there is no time to address performance issues.
By that time so much code is written that you cannot address the performance issues in the most effective way: thinking about which algorithm to use for the dataset that ends up being the common case.
So instead some profiling work gets done and the code goes out the door. So for success you need to have some performance measurements even early on.
The problem is that in that case you end up with some benchmarks that don't measure the right thing (that is what you discover near the end), or you have worthless benchmarks that suffer too much from not being reproducible, taking too long to run, or not giving the dev any idea of where the performance problem really is. So what ends up happening is that only after the code base has been around for a while, and you get to rev n + 1, is there any real handle on any of this performance stuff.
But often what ends up happening is that project management values feature additions, so as long as no single benchmark decreases by more than 2-5% and the overall performance does not decrease by more than 15% compared to the pre-feature build, it gets the okay.
Then a milestone arrives, there is no time again for systematic performance work, and it ships as is. The right approach at that stage would be to not allow a new feature unless the overall benchmark improves by 2%, and to benchmark your competitors as well, but sadly that just does not happen except in the very rare good groups.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167053</id>
	<title>Yes!</title>
	<author>JamesP</author>
	<datestamp>1243866720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It's simple: just don't use Java.</p><p>On a more serious note, my personal opinion is to have the developers use and test the programs on slower machines.</p><p>Yes, they can profile the app, etc., but the problem is that it really doesn't create the 'sense of urgency' that working on a slow machine does. (Note I'm not saying developers should use slow machines to DEVELOP, but there should be a testing phase on slow machines.)</p><p>Also, slower machines produce more obvious profile timings.</p></htmltext>
<tokenext>It 's simple : just do n't use Java .
On a more serious note , my personal opinion is to have the developers use and test the programs on slower machines .
Yes , they can profile the app , etc . , but the problem is that it really does n't create the 'sense of urgency ' that working on a slow machine does .
( Note I 'm not saying developers should use slow machines to DEVELOP , but there should be a testing phase on slow machines . )
Also , slower machines produce more obvious profile timings .</tokentext>
<sentencetext>It's simple: just don't use Java.
On a more serious note, my personal opinion is to have the developers use and test the programs on slower machines.
Yes, they can profile the app, etc., but the problem is that it really doesn't create the 'sense of urgency' that working on a slow machine does.
(Note I'm not saying developers should use slow machines to DEVELOP, but there should be a testing phase on slow machines.)
Also, slower machines produce more obvious profile timings.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166921</id>
	<title>Code Bloat? Think twice.</title>
	<author>Anonymous</author>
	<datestamp>1243866000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>We could also consider the possibility that a twice-as-fast computer on a twice-as-fast network pipe produces twice-as-much data which, in order to keep the same perceived speed, must be processed twice-as-quickly by another computer.</p></htmltext>
<tokenext>We could also consider the possibility that a twice-as-fast computer on a twice-as-fast network pipe produces twice-as-much data which , in order to keep the same perceived speed , must be processed twice-as-quickly by another computer .</tokentext>
<sentencetext>We could also consider the possibility that a twice-as-fast computer on a twice-as-fast network pipe produces twice-as-much data which, in order to keep the same perceived speed, must be processed twice-as-quickly by another computer.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167307</id>
	<title>Re:Of Course</title>
	<author>BeardedChimp</author>
	<datestamp>1243867860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>No, not linear; in the case of Flash it's more like an exponential decay.</htmltext>
<tokenext>No , not linear ; in the case of Flash it 's more like an exponential decay .</tokentext>
<sentencetext>No, not linear; in the case of Flash it's more like an exponential decay.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167773</id>
	<title>Sure it can be broken... just stop upgrading</title>
	<author>Anonymous</author>
	<datestamp>1243870020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Seriously... I can count on one hand the can't-live-without software that has changed in the last 10 years.<br>After those six or so apps, the rest is just candy.</p></htmltext>
<tokenext>Seriously ... I can count on one hand the ca n't-live-without software that has changed in the last 10 years . After those six or so apps , the rest is just candy .</tokentext>
<sentencetext>Seriously... I can count on one hand the can't-live-without software that has changed in the last 10 years. After those six or so apps, the rest is just candy.</sentencetext>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167307
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168657
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166881
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28169977
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168151
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28176531
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167375
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168741
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167511
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28174057
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166735
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167535
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166951
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28175607
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168711
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166951
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28191845
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28171891
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168139
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28175657
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28178429
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28171081
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168159
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167259
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168645
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167375
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28172105
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167259
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28169089
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167939
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167259
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28169151
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168151
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28174955
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168151
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28173171
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168151
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28181127
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166795
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28172815
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_01_1232206_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168675
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166795
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_01_1232206.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167375
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168645
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28176531
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_01_1232206.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167027
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_01_1232206.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166665
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166951
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167535
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168711
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28175607
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166975
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168139
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28191845
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28171081
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28178429
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28172815
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167307
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167259
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28172105
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167939
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28169089
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168159
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28171891
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167511
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168741
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28175657
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168151
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28173171
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28174955
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28169977
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28169151
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_01_1232206.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166795
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168675
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28181127
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_01_1232206.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166735
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28174057
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_01_1232206.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166787
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_01_1232206.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166921
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_01_1232206.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28166881
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28168657
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_01_1232206.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_01_1232206.28167415
</commentlist>
</conversation>
