<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_01_14_239229</id>
	<title>Cliff Click's Crash Course In Modern Hardware</title>
	<author>timothy</author>
	<datestamp>1263467940000</datestamp>
	<htmltext>Lord Straxus writes <i>"In this presentation (video) from the JVM Languages Summit 2009, Cliff Click talks about why it's <a href="http://www.infoq.com/presentations/click-crash-course-modern-hardware">almost impossible to tell what an x86 chip is really doing</a> to your code due to all of the crazy kung-fu and ninjitsu it does to your code while it's running. This talk is an excellent drill-down into the internals of the x86 chip, and it's a great way to get an understanding of what really goes on down at the hardware and why certain types of applications run so much faster than other types of applications. Dr. Cliff really knows his stuff!"</i></htmltext>
<tokentext>Lord Straxus writes " In this presentation ( video ) from the JVM Languages Summit 2009 , Cliff Click talks about why it 's almost impossible to tell what an x86 chip is really doing to your code due to all of the crazy kung-fu and ninjitsu it does to your code while it 's running .
This talk is an excellent drill-down into the internals of the x86 chip , and it 's a great way to get an understanding of what really goes on down at the hardware and why certain types of applications run so much faster than other types of applications .
Dr. Cliff really knows his stuff !
"</tokentext>
<sentencetext>Lord Straxus writes "In this presentation (video) from the JVM Languages Summit 2009, Cliff Click talks about why it's almost impossible to tell what an x86 chip is really doing to your code due to all of the crazy kung-fu and ninjitsu it does to your code while it's running.
This talk is an excellent drill-down into the internals of the x86 chip, and it's a great way to get an understanding of what really goes on down at the hardware and why certain types of applications run so much faster than other types of applications.
Dr. Cliff really knows his stuff!
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775344</id>
	<title>This is why x86 everywhere is a bad idea</title>
	<author>Anonymous</author>
	<datestamp>1263490920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Sure there is tons and tons of x86-friendly code out there but you really don't want it running naked on power sensitive devices such as smart phones? It is trivial these days to query a processor for its capabilities and applications optimized for the desktop and server environments are going to run flat out, partying on every flavor of SSEx available. For x86 to be more than just an also-ran in the mobile world systems need to be able to easily present applications with a VM view of the processor to rein in power hungry apps, IMHO.</p></htmltext>
<tokentext>Sure there is tons and tons of x86-friendly code out there but you really do n't want it running naked on power sensitive devices such as smart phones ?
It is trivial these days to query a processor for its capabilities and applications optimized for the desktop and server environments are going to run flat out , partying on every flavor of SSEx available .
For x86 to be more than just an also-ran in the mobile world systems need to be able to easily present applications with a VM view of the processor to rein in power hungry apps , IMHO .</tokentext>
<sentencetext>Sure there is tons and tons of x86-friendly code out there but you really don't want it running naked on power sensitive devices such as smart phones?
It is trivial these days to query a processor for its capabilities and applications optimized for the desktop and server environments are going to run flat out, partying on every flavor of SSEx available.
For x86 to be more than just an also-ran in the mobile world systems need to be able to easily present applications with a VM view of the processor to rein in power hungry apps, IMHO.</sentencetext>
</comment>
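The capability query this comment calls trivial can be sketched in a few lines. On Linux the kernel publishes the CPUID feature flags through /proc/cpuinfo ("flags" on x86, "Features" on ARM); the helper name and structure below are hypothetical, not anything from the thread:

```python
# Sketch of "query a processor for its capabilities" on Linux: read the
# feature flags the kernel exposes in /proc/cpuinfo. A dispatcher could
# check these before selecting an SSE-optimized code path.
def cpu_flags(path="/proc/cpuinfo"):
    flags = set()
    try:
        with open(path) as f:
            for line in f:
                if line.lower().startswith(("flags", "features")):
                    flags.update(line.split(":", 1)[1].split())
    except OSError:
        pass  # non-Linux systems: no procfs, return empty set
    return flags

if __name__ == "__main__":
    detected = cpu_flags()
    for ext in ("sse2", "sse4_1", "sse4_2", "avx"):
        print(ext, "yes" if ext in detected else "no")
```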
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774118</id>
	<title>Re:Premature optimization is evil... and stupid</title>
	<author>Anonymous</author>
	<datestamp>1263480300000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>I think that the premature optimization claims are way overdone.  In the cases where performance does not matter, then sure, make the code as readable as possible and just accept the performance.</p><p>However, sometimes it is known from the beginning of a project that performance is critical and that achieving that performance will be a challenge.  In such cases, I think that it makes sense to design for performance.  That rarely means using shifts to multiply -- it may, however, mean that you design your data structures so that you can pass the data directly into some FFT functions without packing/unpacking the data to some other format that the rest of the functions were written to expect.  It may also mean that your design scale to many cores and that inner loops be heavily optimized and vectorized.  Of course, all of that code should be performance tested during development against the simpler versions.</p><p>Profiling after the fact sounds like a good idea, but what if the code has no real "hotspot"?  What if you find out that you need to redesign the entire software framework to support zero-copy processing of the data?  Also, profiling tools in general are really not that good.  Running oprofile on a large-scale application with dozens of threads and data source dependencies on other processes can be less than enlightening.  gprof is entirely useless for non-trivial applications.  cachegrind is sometimes helpful, but most people working on performance optimization seem to simply build their own timers based on the rdtsc instruction and manually time sections of the code.</p><p>I work on software for processing medical device data and performance is often critical.  You probably want an image display to update very quickly when it is providing feedback to the doctor guiding a catheter toward your heart, for example.  
We had one project where the team decided to start over with a clean framework without concern for performance -- they would profile and optimize once everything was working.  They followed the advice of many a software engineer: their framework was very nice, replete with design patterns and applications of generic programming, and entirely unscalable beyond a single processor core.  There were no performance tests done during development, and of course the timeline was such that there would only be minimal time for optimization once the functionality was complete.  The software that it was replacing was ugly, but also scaled nicely to many cores.  The software shipped on a system with two quad-core processors, just as it had before.</p><p>Let's just say that customers were unimpressed with the new software framework.</p></htmltext>
<tokentext>I think that the premature optimization claims are way overdone .
In the cases where performance does not matter , then sure , make the code as readable as possible and just accept the performance .
However , sometimes it is known from the beginning of a project that performance is critical and that achieving that performance will be a challenge .
In such cases , I think that it makes sense to design for performance .
That rarely means using shifts to multiply -- it may , however , mean that you design your data structures so that you can pass the data directly into some FFT functions without packing/unpacking the data to some other format that the rest of the functions were written to expect .
It may also mean that your design scale to many cores and that inner loops be heavily optimized and vectorized .
Of course , all of that code should be performance tested during development against the simpler versions .
Profiling after the fact sounds like a good idea , but what if the code has no real " hotspot " ?
What if you find out that you need to redesign the entire software framework to support zero-copy processing of the data ?
Also , profiling tools in general are really not that good .
Running oprofile on a large-scale application with dozens of threads and data source dependencies on other processes can be less than enlightening .
gprof is entirely useless for non-trivial applications .
cachegrind is sometimes helpful , but most people working on performance optimization seem to simply build their own timers based on the rdtsc instruction and manually time sections of the code .
I work on software for processing medical device data and performance is often critical .
You probably want an image display to update very quickly when it is providing feedback to the doctor guiding a catheter toward your heart , for example .
We had one project where the team decided to start over with a clean framework without concern for performance -- they would profile and optimize once everything was working .
They followed the advice of many a software engineer : their framework was very nice , replete with design patterns and applications of generic programming , and entirely unscalable beyond a single processor core .
There were no performance tests done during development , and of course the timeline was such that there would only be minimal time for optimization once the functionality was complete .
The software that it was replacing was ugly , but also scaled nicely to many cores .
The software shipped on a system with two quad-core processors , just as it had before .
Let 's just say that customers were unimpressed with the new software framework .</tokentext>
<sentencetext>I think that the premature optimization claims are way overdone.
In the cases where performance does not matter, then sure, make the code as readable as possible and just accept the performance.
However, sometimes it is known from the beginning of a project that performance is critical and that achieving that performance will be a challenge.
In such cases, I think that it makes sense to design for performance.
That rarely means using shifts to multiply -- it may, however, mean that you design your data structures so that you can pass the data directly into some FFT functions without packing/unpacking the data to some other format that the rest of the functions were written to expect.
It may also mean that your design scale to many cores and that inner loops be heavily optimized and vectorized.
Of course, all of that code should be performance tested during development against the simpler versions.
Profiling after the fact sounds like a good idea, but what if the code has no real "hotspot"?
What if you find out that you need to redesign the entire software framework to support zero-copy processing of the data?
Also, profiling tools in general are really not that good.
Running oprofile on a large-scale application with dozens of threads and data source dependencies on other processes can be less than enlightening.
gprof is entirely useless for non-trivial applications.
cachegrind is sometimes helpful, but most people working on performance optimization seem to simply build their own timers based on the rdtsc instruction and manually time sections of the code.
I work on software for processing medical device data and performance is often critical.
You probably want an image display to update very quickly when it is providing feedback to the doctor guiding a catheter toward your heart, for example.
We had one project where the team decided to start over with a clean framework without concern for performance -- they would profile and optimize once everything was working.
They followed the advice of many a software engineer: their framework was very nice, replete with design patterns and applications of generic programming, and entirely unscalable beyond a single processor core.
There were no performance tests done during development, and of course the timeline was such that there would only be minimal time for optimization once the functionality was complete.
The software that it was replacing was ugly, but also scaled nicely to many cores.
The software shipped on a system with two quad-core processors, just as it had before.
Let's just say that customers were unimpressed with the new software framework.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742</parent>
</comment>
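The "shifts to multiply" micro-optimization the comment above dismisses is easy to show concretely: the two forms are equivalent, and compilers and JITs apply this strength reduction themselves, so source code can keep the readable multiply (a minimal Python sketch, not from the thread):

```python
# Left-shifting by k multiplies by 2**k; optimizers perform this rewrite
# automatically, which is why it rarely belongs in hand-written code.
def times_eight_shift(x: int) -> int:
    return x << 3      # hand-optimized form

def times_eight_plain(x: int) -> int:
    return x * 8       # readable form; the compiler emits the same shift

for n in (0, 1, 7, 123456):
    assert times_eight_shift(n) == times_eight_plain(n)
```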
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775772</id>
	<title>An Awesome Assembler</title>
	<author>Taco Cowboy</author>
	<datestamp>1263496200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Here's an awesome assembler that does wonders --- Flat Assembler.</p><p>Download from: <a href="http://flatassembler.net/download.php" title="flatassembler.net">http://flatassembler.net/download.php</a> [flatassembler.net]</p><p>Forum: <a href="http://board.flatassembler.net/index.php" title="flatassembler.net">http://board.flatassembler.net/index.php</a> [flatassembler.net]</p></htmltext>
<tokentext>Here 's an awesome assembler that does wonders --- Flat Assembler .
Download from : http://flatassembler.net/download.php [ flatassembler.net ]
Forum : http://board.flatassembler.net/index.php [ flatassembler.net ]</tokentext>
<sentencetext>Here's an awesome assembler that does wonders --- Flat Assembler.
Download from: http://flatassembler.net/download.php [flatassembler.net]
Forum: http://board.flatassembler.net/index.php [flatassembler.net]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773272</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773440</id>
	<title>Re:Code in high-level</title>
	<author>Anonymous</author>
	<datestamp>1263475920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I think you can legally get MASM  (Microsoft Macro Assembler) somewhere on the internet for free. A good place to start would be Microsoft. Then you can do what real coders do, and teach yourself!</p><p>And to think I paid several hundred dollars for that, back in the day.</p></htmltext>
<tokentext>I think you can legally get MASM ( Microsoft Macro Assembler ) somewhere on the internet for free .
A good place to start would be Microsoft .
Then you can do what real coders do , and teach yourself !
And to think I paid several hundred dollars for that , back in the day .</tokentext>
<sentencetext>I think you can legally get MASM  (Microsoft Macro Assembler) somewhere on the internet for free.
A good place to start would be Microsoft.
Then you can do what real coders do, and teach yourself!
And to think I paid several hundred dollars for that, back in the day.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773946</id>
	<title>Kung-Fu and Ninjitsu...They're not dead!</title>
	<author>geekmux</author>
	<datestamp>1263478920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This just in...Apparently Bruce Lee and Lee Van Cleef are alive and well and working for Intel, which likely accounts for all the <i>"crazy kung-fu and ninjitsu"</i> going on there...</p></htmltext>
<tokentext>This just in...Apparently Bruce Lee and Lee Van Cleef are alive and well and working for Intel , which likely accounts for all the " crazy kung-fu and ninjitsu " going on there ...</tokentext>
<sentencetext>This just in...Apparently Bruce Lee and Lee Van Cleef are alive and well and working for Intel, which likely accounts for all the "crazy kung-fu and ninjitsu" going on there...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30827362</id>
	<title>how profiling tools fit in</title>
	<author>aap</author>
	<datestamp>1263912180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>or you needlessly wrote some hideous O(n!) search which is NP complete, then no amount of profiling and instruction tuning is ever going to help you.</p></div><p>In this situation the value of the profiling tools is not for instruction tuning, but to help you notice the existence of the bad search function so you can replace it with something else.</p><p>In a large program there can be lurking n-squaredness which may not be obvious from looking at any one section of the code. For example there could be an innocent function which loops over n objects, and you may not realize that it is being called from a function twelve levels up the stack which is also looping over the same n objects.</p><p>Sometimes it's enough to just stop in the debugger a few times to realize what is slower than it should be and why. In other cases, browsing the output of a good call graph profiler can help inspire the fix faster.</p>
	</htmltext>
<tokentext>or you needlessly wrote some hideous O ( n ! ) search which is NP complete , then no amount of profiling and instruction tuning is ever going to help you .
In this situation the value of the profiling tools is not for instruction tuning , but to help you notice the existence of the bad search function so you can replace it with something else .
In a large program there can be lurking n-squaredness which may not be obvious from looking at any one section of the code .
For example there could be an innocent function which loops over n objects , and you may not realize that it is being called from a function twelve levels up the stack which is also looping over the same n objects .
Sometimes it 's enough to just stop in the debugger a few times to realize what is slower than it should be and why .
In other cases , browsing the output of a good call graph profiler can help inspire the fix faster .</tokentext>
<sentencetext>or you needlessly wrote some hideous O(n!) search which is NP complete, then no amount of profiling and instruction tuning is ever going to help you.
In this situation the value of the profiling tools is not for instruction tuning, but to help you notice the existence of the bad search function so you can replace it with something else.
In a large program there can be lurking n-squaredness which may not be obvious from looking at any one section of the code.
For example there could be an innocent function which loops over n objects, and you may not realize that it is being called from a function twelve levels up the stack which is also looping over the same n objects.
Sometimes it's enough to just stop in the debugger a few times to realize what is slower than it should be and why.
In other cases, browsing the output of a good call graph profiler can help inspire the fix faster.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30777024</parent>
</comment>
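The "lurking n-squaredness" the comment above describes can be made concrete with a toy example (hypothetical names, Python for brevity): an O(n) helper that looks harmless in isolation becomes quadratic when a caller higher up the stack invokes it once per object.

```python
def find_owner(obj, objects):
    # Innocent-looking O(n) scan over all objects.
    return next(o for o in objects if o["id"] == obj["owner"])

def link_all_quadratic(objects):
    # Calls the O(n) helper once per object -> O(n^2) overall.
    return [find_owner(o, objects) for o in objects]

def link_all_linear(objects):
    # The fix a profiler points to: build an index once, then O(1) lookups.
    by_id = {o["id"]: o for o in objects}
    return [by_id[o["owner"]] for o in objects]

objs = [{"id": i, "owner": (i + 1) % 100} for i in range(100)]
assert link_all_quadratic(objs) == link_all_linear(objs)
```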
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773208</id>
	<title>Re:Well that is /.'d</title>
	<author>Anonymous</author>
	<datestamp>1263474600000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>For me the link is borked.  Coral cache anyone?  The link is dead.  Its dead Jim!  We need Miracle Max.  Its not completely dead, its only mostly dead. .....Oh wait!  I'm mixing my star trek and princess bride metaphors.  My bad.</p></htmltext>
<tokentext>For me the link is borked .
Coral cache anyone ?
The link is dead .
Its dead Jim !
We need Miracle Max .
Its not completely dead , its only mostly dead .
.....Oh wait !
I 'm mixing my star trek and princess bride metaphors .
My bad .</tokentext>
<sentencetext>For me the link is borked.
Coral cache anyone?
The link is dead.
Its dead Jim!
We need Miracle Max.
Its not completely dead, its only mostly dead.
.....Oh wait!
I'm mixing my star trek and princess bride metaphors.
My bad.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772780</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772618</id>
	<title>Fast forward...</title>
	<author>LostCluster</author>
	<datestamp>1263471600000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>I can't say I've WTFV like I usually RTFA before you get to see it... but I can tell you this: The first four minutes of the video are spent asking which topic the room wants to see. No need to watch that part. Then it gets more interesting.</p></htmltext>
<tokentext>I ca n't say I 've WTFV like I usually RTFA before you get to see it... but I can tell you this : The first four minutes of the video are spent asking which topic the room wants to see .
No need to watch that part .
Then it gets more interesting .</tokentext>
<sentencetext>I can't say I've WTFV like I usually RTFA before you get to see it... but I can tell you this: The first four minutes of the video are spent asking which topic the room wants to see.
No need to watch that part.
Then it gets more interesting.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775164</id>
	<title>Virtualization</title>
	<author>Anonymous</author>
	<datestamp>1263488880000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Nobody cares about performance, or what exactly the code does, since java takes care of all those "pesky details".</p><p>Virtual machines use a very simple instruction set, hopefully optimized for the processor , hopefully optimized by the OS, which is hopefully optimized for the processor.</p><p>Want performance? Don't code in java.</p><p>Want performance? Do some PERFORMANCE ANALYSIS.</p><p>I know it's hard, since it requires actual MATH, something that simple programmers are not taught, and really don't care about, but it's worth it.</p><p>Some up front optimization can save you MONTHS of recoding effort.</p><p>Rule #1 in Performance Optimization: DON'T OPTIMIZE, PLAN!</p></htmltext>
<tokentext>Nobody cares about performance , or what exactly the code does , since java takes care of all those " pesky details " .
Virtual machines use a very simple instruction set , hopefully optimized for the processor , hopefully optimized by the OS , which is hopefully optimized for the processor .
Want performance ?
Do n't code in java .
Want performance ?
Do some PERFORMANCE ANALYSIS .
I know it 's hard , since it requires actual MATH , something that simple programmers are not taught , and really do n't care about , but it 's worth it .
Some up front optimization can save you MONTHS of recoding effort .
Rule # 1 in Performance Optimization : DO N'T OPTIMIZE , PLAN !</tokentext>
<sentencetext>Nobody cares about performance, or what exactly the code does, since java takes care of all those "pesky details".
Virtual machines use a very simple instruction set, hopefully optimized for the processor, hopefully optimized by the OS, which is hopefully optimized for the processor.
Want performance?
Don't code in java.
Want performance?
Do some PERFORMANCE ANALYSIS.
I know it's hard, since it requires actual MATH, something that simple programmers are not taught, and really don't care about, but it's worth it.
Some up front optimization can save you MONTHS of recoding effort.
Rule #1 in Performance Optimization: DON'T OPTIMIZE, PLAN!</sentencetext>
</comment>
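One standard piece of the up-front performance analysis this comment demands is Amdahl's law, which bounds what any optimization can buy before a line of code is rewritten (a small worked example, not taken from the thread):

```python
def amdahl_speedup(p: float, s: float) -> float:
    # Amdahl's law: if fraction p of the runtime is sped up by factor s,
    # overall speedup = 1 / ((1 - p) + p / s).
    return 1.0 / ((1.0 - p) + p / s)

# Optimizing 80% of the runtime infinitely still caps the program at 5x;
# results like this are what redirect months of recoding effort early.
print(amdahl_speedup(0.8, 10))    # 10x on the hot 80% -> about 3.57x overall
print(amdahl_speedup(0.8, 1e9))  # near-infinite speedup -> approaches 5x
```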
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30776222</id>
	<title>Re:Code in high-level</title>
	<author>vtcodger</author>
	<datestamp>1263589020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There are, or used to be, a number of free assemblers that could generate X86 code.  Maybe not all the instructions, but more than enough for "Hello World" and other simple exercises.  The problem is -- as others have mentioned -- that the x86 instruction set has all the beauty and elegance of a third world slum.</p><p>As an alternative for learning, I'd suggest using an emulator and programming for some sane instruction set.  Maybe the MC6809 which had a nice, clean, easily comprehensible instruction set.  (I'm sure that there are other equally good choices).  If one still has an interest in assembly language programming after that, then by all means tackle x86.  You'll probably be appalled.</p></htmltext>
<tokentext>There are , or used to be , a number of free assemblers that could generate X86 code .
Maybe not all the instructions , but more than enough for " Hello World " and other simple exercises .
The problem is -- as others have mentioned -- that the x86 instruction set has all the beauty and elegance of a third world slum .
As an alternative for learning , I 'd suggest using an emulator and programming for some sane instruction set .
Maybe the MC6809 which had a nice , clean , easily comprehensible instruction set .
( I 'm sure that there are other equally good choices ) .
If one still has an interest in assembly language programming after that , then by all means tackle x86 .
You 'll probably be appalled .</tokentext>
<sentencetext>There are, or used to be, a number of free assemblers that could generate X86 code.
Maybe not all the instructions, but more than enough for "Hello World" and other simple exercises.
The problem is -- as others have mentioned -- that the x86 instruction set has all the beauty and elegance of a third world slum.
As an alternative for learning, I'd suggest using an emulator and programming for some sane instruction set.
Maybe the MC6809 which had a nice, clean, easily comprehensible instruction set.
(I'm sure that there are other equally good choices).
If one still has an interest in assembly language programming after that, then by all means tackle x86.
You'll probably be appalled.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773440</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30792136</id>
	<title>Re:rule of the code</title>
	<author>Jeppe Salvesen</author>
	<datestamp>1263671580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Write good, clean, well-designed and <em>efficient</em> code. The compiler will not make your mergesort into a quicksort.</p></htmltext>
<tokentext>Write good , clean , well-designed and efficient code .
The compiler will not make your mergesort into a quicksort .</tokentext>
<sentencetext>Write good, clean, well-designed and efficient code.
The compiler will not make your mergesort into a quicksort.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774060</parent>
</comment>
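The algorithmic point above is measurable: no compiler optimization changes a quadratic sort's comparison count, only the cost of each comparison (an illustrative Python sketch with an instrumented insertion sort, not from the thread):

```python
def insertion_sort(items):
    # O(n^2) sort instrumented to count comparisons; a compiler can make
    # each comparison faster, but cannot reduce how many there are.
    a = list(items)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a, comparisons

data = list(range(200, 0, -1))       # worst case: reverse-sorted input
out, cmps = insertion_sort(data)
assert out == sorted(data)
assert cmps == 199 * 200 // 2        # n(n-1)/2 = 19900; n*log2(n) would be ~1500
```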
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774820</id>
	<title>Re:I hate flash video</title>
	<author>Korin43</author>
	<datestamp>1263485760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You do realize that there's a native 64 bit version of flash for Linux now right?</htmltext>
<tokentext>You do realize that there 's a native 64 bit version of flash for Linux now right ?</tokentext>
<sentencetext>You do realize that there's a native 64 bit version of flash for Linux now right?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773556</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775130</id>
	<title>Another Imbecile incompetently running video</title>
	<author>rfc1394</author>
	<datestamp>1263488580000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>0</modscore>
	<htmltext><p>I am guessing that the site was slashdotted because the video never ran.  Yet another example of some imbecile who designs their own video player and either can't run the material correctly or can't handle the load.  I see this over and over, someone - or some site - decides to run their own video player and it's either inoperative or runs badly.  I wrote about this <a href="http://www.paul-robinson.us/index.php?title=some\_people\_still\_don\_t\_get\_it" title="paul-robinson.us">on my blog</a> [paul-robinson.us] in October 2008 how so many places try - and fail - to properly run video.</p><p>You know, running video correctly isn't rocket science, YouTube does it fine under loads that would slashdot Slashdot.  But do these stupidos use YouTube to serve their video?  Noooo, they'd prefer to use some incompetent who can't provide it properly, probably because they're under the impression they'd lose ad revenue or something, I guess.  But I see this all the time.  The New York Times provides video for some of their stories,  But their video doesn't work, and stalls, but has no way to cache the video so that if it fails you can either get it to run smoothly or go back and run it again without having to download the entire video all over again after it's already been served.  I guess they never thought about people having problems,</p><p>If these were streamed video like a live event, that would be one thing.  But they do the exact same thing YouTube does, they feed stored video to a player written using Adobe Flash.  So there's no excuse for their failures except pure incompetence and/or stupidity.</p></htmltext>
<tokentext>I am guessing that the site was slashdotted because the video never ran .
Yet another example of some imbecile who designs their own video player and either ca n't run the material correctly or ca n't handle the load .
I see this over and over , someone - or some site - decides to run their own video player and it 's either inoperative or runs badly .
I wrote about this on my blog [ paul-robinson.us ] in October 2008 how so many places try - and fail - to properly run video .
You know , running video correctly is n't rocket science , YouTube does it fine under loads that would slashdot Slashdot .
But do these stupidos use YouTube to serve their video ?
Noooo , they 'd prefer to use some incompetent who ca n't provide it properly , probably because they 're under the impression they 'd lose ad revenue or something , I guess .
But I see this all the time .
The New York Times provides video for some of their stories , But their video does n't work , and stalls , but has no way to cache the video so that if it fails you can either get it to run smoothly or go back and run it again without having to download the entire video all over again after it 's already been served .
I guess they never thought about people having problems .
If these were streamed video like a live event , that would be one thing .
But they do the exact same thing YouTube does , they feed stored video to a player written using Adobe Flash .
So there 's no excuse for their failures except pure incompetence and/or stupidity .</tokentext>
<sentencetext>I am guessing that the site was slashdotted because the video never ran.
Yet another example of some imbecile who designs their own video player and either can't run the material correctly or can't handle the load.
I see this over and over, someone - or some site - decides to run their own video player and it's either inoperative or runs badly.
I wrote about this on my blog [paul-robinson.us] in October 2008 how so many places try - and fail - to properly run video. You know, running video correctly isn't rocket science, YouTube does it fine under loads that would slashdot Slashdot.
But do these stupidos use YouTube to serve their video?
Noooo, they'd prefer to use some incompetent who can't provide it properly, probably because they're under the impression they'd lose ad revenue or something, I guess.
But I see this all the time.
The New York Times provides video for some of their stories.  But their video doesn't work, and stalls, but has no way to cache the video so that if it fails you can either get it to run smoothly or go back and run it again without having to download the entire video all over again after it's already been served.
I guess they never thought about people having problems. If these were streamed video like a live event, that would be one thing.
But they do the exact same thing YouTube does, they feed stored video to a player written using Adobe Flash.
So there's no excuse for their failures except pure incompetence and/or stupidity.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775056</id>
	<title>Re:Lots of low Slashdot IDs commenting on this...</title>
	<author>chill</author>
	<datestamp>1263487680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Who you talking to, noob?</p></htmltext>
<tokenext>Who you talking to , noob ?</tokentext>
<sentencetext>Who you talking to, noob?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774808</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773470</id>
	<title>Re:Code in high-level</title>
	<author>smash</author>
	<datestamp>1263476040000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>Not quite.
<p>
But, it's certainly better to code in a high level language first, test, tweak the algorithm as much as you can, PROFILE and THEN start breaking out your assembler.
No point optimising 99% of your code in super fast asm if it only spends 1% of the cpu time in it.  Even if you make all that code 10x as fast, you've only saved 0.9% cpu time.<nobr> <wbr></nobr>:)</p></htmltext>
<tokenext>Not quite .
But , it 's certainly better to code in a high level language first , test , tweak the algorithm as much as you can , PROFILE and THEN start breaking out your assembler .
No point optimising 99 % of your code in super fast asm if it only spends 1 % of the cpu time in it .
Even if you make all that code 10x as fast , you 've only saved 0.9 % cpu time .
: )</tokentext>
<sentencetext>Not quite.
But, it's certainly better to code in a high level language first, test, tweak the algorithm as much as you can, PROFILE and THEN start breaking out your assembler.
No point optimising 99% of your code in super fast asm if it only spends 1% of the cpu time in it.
Even if you make all that code 10x as fast, you've only saved 0.9% cpu time.
:)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640</parent>
</comment>
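The arithmetic in smash's comment above (a 10x speedup on code using 1% of CPU time saves only 0.9% overall) is Amdahl's law. A minimal sketch, not from the thread; the function name and examples are illustrative:

```python
def amdahl_speedup(fraction, factor):
    """Overall speedup when `fraction` of total runtime is accelerated
    by `factor` (Amdahl's law); the remaining runtime is unchanged."""
    return 1.0 / ((1.0 - fraction) + fraction / factor)

# 10x speedup on code that accounts for 1% of CPU time: new runtime is
# 0.99 + 0.01/10 = 0.991 of the original, i.e. only 0.9% of time saved.
overall = amdahl_speedup(0.01, 10)
```

This is exactly why the comment says to profile first: `fraction` dominates the result, and only the profiler tells you what `fraction` actually is.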
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30776686</id>
	<title>Re:Lots of low Slashdot IDs commenting on this...</title>
	<author>daveb1</author>
	<datestamp>1263551880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>well are you compensating for something then ?<nobr> <wbr></nobr>:P</htmltext>
<tokenext>well are you compensating for something then ?
: P</tokentext>
<sentencetext>well are you compensating for something then ?
:P</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774808</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773542</id>
	<title>It's not just x86</title>
	<author>RzUpAnmsCwrds</author>
	<datestamp>1263476460000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>Features like out of order execution, caches, and branch prediction/speculation are commonplace on many architectures, including the next generation ARM Cortex A9 and many POWER, SPARC, and other RISC architectures. Even in-order designs like Atom, Cortex A8, or POWER6 have branch prediction and multi-level caches.</p><p>The most important thing for performance is to understand the memory hierarchy. Out-of-order execution lets you get away with a lot of stupid things, since many of the pipeline stalls you would otherwise create can be re-ordered around. In contrast, the memory subsystem can do relatively little for you if your working set is too large and you don't access memory in an efficient pattern.</p></htmltext>
<tokenext>Features like out of order execution , caches , and branch prediction/speculation are commonplace on many architectures , including the next generation ARM Cortex A9 and many POWER , SPARC , and other RISC architectures .
Even in-order designs like Atom , Cortex A8 , or POWER6 have branch prediction and multi-level caches . The most important thing for performance is to understand the memory hierarchy .
Out-of-order execution lets you get away with a lot of stupid things , since many of the pipeline stalls you would otherwise create can be re-ordered around .
In contrast , the memory subsystem can do relatively little for you if your working set is too large and you do n't access memory in an efficient pattern .</tokentext>
<sentencetext>Features like out of order execution, caches, and branch prediction/speculation are commonplace on many architectures, including the next generation ARM Cortex A9 and many POWER, SPARC, and other RISC architectures.
Even in-order designs like Atom, Cortex A8, or POWER6 have branch prediction and multi-level caches. The most important thing for performance is to understand the memory hierarchy.
Out-of-order execution lets you get away with a lot of stupid things, since many of the pipeline stalls you would otherwise create can be re-ordered around.
In contrast, the memory subsystem can do relatively little for you if your working set is too large and you don't access memory in an efficient pattern.</sentencetext>
</comment>
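The access-pattern point in the comment above can be made concrete by traversing the same matrix two ways: in the order rows are laid out, and strided across rows. A sketch (not from the thread; Python shown for brevity, though the cache effect only shows up on large flat arrays in a compiled language):

```python
N = 256
matrix = [[i * N + j for j in range(N)] for i in range(N)]

def sum_row_major(m):
    # Visits elements in the order each row is laid out: sequential
    # accesses, so every cache line fetched gets fully used.
    return sum(x for row in m for x in row)

def sum_col_major(m):
    # Jumps a full row-stride between consecutive accesses; on a large
    # enough working set, every access can miss in cache.
    return sum(m[i][j] for j in range(N) for i in range(N))
```

Both functions return the identical sum; only the traversal order differs, which is precisely the "efficient pattern" the comment is talking about.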
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774950</id>
	<title>Re:Lots of low Slashdot IDs commenting on this...</title>
	<author>iammani</author>
	<datestamp>1263486840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Meh, mine is a 0-digit UID. I win, game over!</htmltext>
<tokenext>Meh , mine is a 0-digit UID .
I win , game over !</tokentext>
<sentencetext>Meh, mine is a 0-digit UID.
I win, game over!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774808</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30777336</id>
	<title>Re:Fast forward...</title>
	<author>drseuk</author>
	<datestamp>1263559740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Or kill Flash Gordon to get rid of both Adobe and Intel simultaneously<nobr> <wbr></nobr>...</htmltext>
<tokenext>Or kill Flash Gordon to get rid of both Adobe and Intel simultaneously .. .</tokentext>
<sentencetext>Or kill Flash Gordon to get rid of both Adobe and Intel simultaneously ...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772944</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773556</id>
	<title>I hate flash video</title>
	<author>Anonymous</author>
	<datestamp>1263476520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I wish they'd all just use HTML5 or put it on YouTube so I can use youtube-dl or something.  Otherwise it either doesn't work at all (my amd64 Linux boxes) or is slow and jerky (my Mac OSX box).  It's really frustrating.</p></htmltext>
<tokenext>I wish they 'd all just use HTML5 or put it on YouTube so I can use youtube-dl or something .
Otherwise it either does n't work at all ( my amd64 Linux boxes ) or is slow and jerky ( my Mac OSX box ) .
It 's really frustrating .</tokentext>
<sentencetext>I wish they'd all just use HTML5 or put it on YouTube so I can use youtube-dl or something.
Otherwise it either doesn't work at all (my amd64 Linux boxes) or is slow and jerky (my Mac OSX box).
It's really frustrating.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30776624</id>
	<title>Re:rule of the code</title>
	<author>SorcererX</author>
	<datestamp>1263550980000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext>You got the fastest time simply by playing with the compiler flags?
We had a similar problem where we had to do a matrix multiplication on symmetric matrices for C = AB^T+BA^T (rank2k update with alpha=1.0, beta=0.0) and there was nothing the compiler could do for us to get even remotely near good scores.
Doing the simplest implementation we got about 5 FLOPS/cycle on an 8 core system, optimizing just with SSE etc, I got it up to about 13 FLOPS/cycle, and by splitting up the matrix in tiny parts to avoid cache thrashing etc I was able to get it up to 47 FLOPS/cycle. For comparison, Intel's MKL library managed about 85 FLOPS/cycle on the same hardware. I believe the best in my class was about 50 FLOPS/cycle, and it took an insane amount of fiddling for any of us to get above 25-30 FLOPS/cycle or so.

That said, most things done on a computer are rarely that limited by memory access, and then the compiler does an awesome job<nobr> <wbr></nobr>:)</htmltext>
<tokenext>You got the fastest time simply by playing with the compiler flags ?
We had a similar problem where we had to do a matrix multiplication on symmetric matrices for C = AB ^ T + BA ^ T ( rank2k update with alpha = 1.0 , beta = 0.0 ) and there was nothing the compiler could do for us to get even remotely near good scores .
Doing the simplest implementation we got about 5 FLOPS/cycle on an 8 core system , optimizing just with SSE etc , I got it up to about 13 FLOPS/cycle , and by splitting up the matrix in tiny parts to avoid cache thrashing etc I was able to get it up to 47 FLOPS/cycle .
For comparison Intel 's MKL library managed about 85 FLOPS/cycle on the same hardware .
I believe the best in my class was about 50 FLOPS/cycle , and it took an insane amount of fiddling for any of us to get above 25-30 FLOPS/cycle or so .
That said , most things done on a computer are rarely that limited by memory access , and then the compiler does an awesome job : )
<sentencetext>You got the fastest time simply by playing with the compiler flags?
We had a similar problem where we had to do a matrix multiplication on symmetric matrices for C = AB^T+BA^T (rank2k update with alpha=1.0, beta=0.0) and there was nothing the compiler could do for us to get even remotely near good scores.
Doing the simplest implementation we got about 5 FLOPS/cycle on an 8 core system, optimizing just with SSE etc, I got it up to about 13 FLOPS/cycle, and by splitting up the matrix in tiny parts to avoid cache thrashing etc I was able to get it up to 47 FLOPS/cycle.
For comparison Intel's MKL library managed about 85 FLOPS/cycle on the same hardware.
I believe the best in my class was about 50 FLOPS/cycle, and it took an insane amount of fiddling for any of us to get above 25-30 FLOPS/cycle or so.
That said, most things done on a computer are rarely that limited by memory access, and then the compiler does an awesome job :)
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774060</parent>
</comment>
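The tiling trick SorcererX describes (splitting the matrix into tiny parts so each part stays in cache) can be sketched as a blocked multiply. A toy illustration only, not the rank2k kernel from the comment, and the function names are made up:

```python
def matmul_naive(A, B):
    # Straightforward triple loop: strides through B column-wise,
    # which is the cache-hostile pattern on large matrices.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matmul_blocked(A, B, block=32):
    # Same arithmetic, but iterated over block x block tiles so the
    # working set per tile fits in cache on real hardware.
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, block):
        for kk in range(0, n, block):
            for jj in range(0, n, block):
                for i in range(ii, min(ii + block, n)):
                    Ci, Ai = C[i], A[i]
                    for k in range(kk, min(kk + block, n)):
                        a, Bk = Ai[k], B[k]
                        for j in range(jj, min(jj + block, n)):
                            Ci[j] += a * Bk[j]
    return C
```

In interpreted Python the blocking buys nothing; the point is the loop structure, which is the same reorganization a tuned C/SSE kernel (or MKL) applies.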
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772638</id>
	<title>Could someone give me a crash course</title>
	<author>Anonymous</author>
	<datestamp>1263471720000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>on the website?  I'm not sure what I'm looking at...</p></htmltext>
<tokenext>on the website ?
I 'm not sure what I 'm looking at.. .</tokentext>
<sentencetext>on the website?
I'm not sure what I'm looking at...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30780062</id>
	<title>Re:...except for the uControllers I use.</title>
	<author>TheRaven64</author>
	<datestamp>1263577080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Even that niche probably won't be around for much longer.  You can get 32-bit ARM and SPARC processors for not much more than you're paying.  Even a high-end ARM SoC costs around $40, the low end ones are well under $10.  When you can get a 50MHz 32-bit ARM core for 50, there won't be much incentive to keep using the 16-bit chips.  Out of curiosity, I just had a look at what kind of price these chips fetch now.  You can get an LPC2101 for around $1.50.  This is a 32-bit ARM7 core (supporting 16-bit Thumb code if code density is important to you) running at 70MHz.  Less RAM than your microcontrollers, but it has controllers for plugging in external memory.  $5 gets you 128KB of flash and 32KB of RAM, so the prices already aren't far off what you're paying for a core that is much slower and harder to program.</htmltext>
<tokenext>Even that niche probably wo n't be around for much longer .
You can get 32-bit ARM and SPARC processors for not much more than you 're paying .
Even a high-end ARM SoC costs around $ 40 , the low end ones are well under $ 10 .
When you can get a 50MHz 32-bit ARM core for 50 , there wo n't be much incentive to keep using the 16-bit chips .
Out of curiosity , I just had a look at what kind of price these chips fetch now .
You can get an LPC2101 for around $ 1.50 .
This is a 32-bit ARM7 core ( supporting 16-bit Thumb code if code density is important to you ) running at 70MHz .
Less RAM than your microcontrollers , but it has controllers for plugging in external memory .
$ 5 gets you 128KB of flash and 32KB of RAM , so the prices already are n't far off what you 're paying for a core that is much slower and harder to program .</tokentext>
<sentencetext>Even that niche probably won't be around for much longer.
You can get 32-bit ARM and SPARC processors for not much more than you're paying.
Even a high-end ARM SoC costs around $40, the low end ones are well under $10.
When you can get a 50MHz 32-bit ARM core for 50, there won't be much incentive to keep using the 16-bit chips.
Out of curiosity, I just had a look at what kind of price these chips fetch now.
You can get an LPC2101 for around $1.50.
This is a 32-bit ARM7 core (supporting 16-bit Thumb code if code density is important to you) running at 70MHz.
Less RAM than your microcontrollers, but it has controllers for plugging in external memory.
$5 gets you 128KB of flash and 32KB of RAM, so the prices already aren't far off what you're paying for a core that is much slower and harder to program.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775570</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772946</id>
	<title>Re:Premature optimization is evil... and stupid</title>
	<author>Monkeedude1212</author>
	<datestamp>1263473280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>If (and only if!)</p></div><p>Compiler Error: Numerous Syntax Errors.<br>Line 1, 4; Object Expected<br>Line 1, 15; '(' Expected<br>Line 1, 16; Condition Expected<br>Line 1, 17; 'Then' Expected</p>
	</htmltext>
<tokenext>If ( and only if !
) Compiler Error : Numerous Syntax Errors . Line 1 , 4 ; Object Expected Line 1 , 15 ; ' ( ' Expected Line 1 , 16 ; Condition Expected Line 1 , 17 ; 'Then ' Expected</tokentext>
<sentencetext> If (and only if!
) Compiler Error: Numerous Syntax Errors. Line 1, 4; Object Expected. Line 1, 15; '(' Expected. Line 1, 16; Condition Expected. Line 1, 17; 'Then' Expected
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772736</id>
	<title>Re:Code in high-level</title>
	<author>Anonymous</author>
	<datestamp>1263472200000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Sometimes it's just plain FUN FUN FUN to code in asm. You're right that most programmers will never have a need for it at all (with some exceptions, such as those messing with operating systems or embedded systems), although knowing some ASM can help a lot with debugging. I suppose one could (read: should) learn a little ASM to have a better idea of what the hardware is doing, this will allow you to optimize your code a little, or (more importantly) write it in such a way that makes it easier for the compiler to optimize.</p></htmltext>
<tokenext>Sometimes it 's just plain FUN FUN FUN to code in asm .
You 're right that most programmers will never have a need for it at all ( with some exceptions , such as those messing with operating systems or embedded systems ) , although knowing some ASM can help a lot with debugging .
I suppose one could ( read : should ) learn a little ASM to have a better idea of what the hardware is doing , this will allow you to optimize your code a little , or ( more importantly ) write it in such a way that makes it easier for the compiler to optimize .</tokentext>
<sentencetext>Sometimes it's just plain FUN FUN FUN to code in asm.
You're right that most programmers will never have a need for it at all (with some exceptions, such as those messing with operating systems or embedded systems), although knowing some ASM can help a lot with debugging.
I suppose one could (read: should) learn a little ASM to have a better idea of what the hardware is doing, this will allow you to optimize your code a little, or (more importantly) write it in such a way that makes it easier for the compiler to optimize.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30777024</id>
	<title>Re:Premature optimization is evil... and stupid</title>
	<author>igb</author>
	<datestamp>1263556500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Which is all true, but I think misses one important point: you do need to consider the complexity order of your algorithms.  I've seen several applications which do ludicrous O(n^2) or worse operations, and by the time they get to see a dataset large enough to provoke serious problems no-one dares touch the basic methods, so all that's left is local code optimisation or throwing more hardware at the problem.  And if the problem is that you used an O(n^2) sort rather than an O(n.log(n)) sort, or you computed a cross-product of two large tables (O(n^2) at least for both space and time), or you needlessly wrote some hideous O(n!) search which is NP complete, then no amount of profiling and instruction tuning is ever going to help you.   I think that at the outset of a design process, it's important to consider the complexity order of any algorithm used which has the potential to process large amounts of data, as otherwise you can see some startlingly bad performance problems as you scale.</htmltext>
<tokenext>Which is all true , but I think misses one important point : you do need to consider the complexity order of your algorithms .
I 've seen several applications which do ludicrous O ( n ^ 2 ) or worse operations , and by the time they get to see a dataset large enough to provoke serious problems no-one dares touch the basic methods , so all that 's left is local code optimisation or throwing more hardware at the problem .
And if the problem is that you used an O ( n ^ 2 ) sort rather than an O ( n.log ( n ) ) sort , or you computed a cross-product of two large tables ( O ( n ^ 2 ) at least for both space and time ) , or you needlessly wrote some hideous O ( n !
) search which is NP complete , then no amount of profiling and instruction tuning is ever going to help you .
I think that at the outset of a design process , it 's important to consider the complexity order of any algorithm used which has the potential to process large amounts of data , as otherwise you can see some startlingly bad performance problems as you scale .</tokentext>
<sentencetext>Which is all true, but I think misses one important point: you do need to consider the complexity order of your algorithms.
I've seen several applications which do ludicrous O(n^2) or worse operations, and by the time they get to see a dataset large enough to provoke serious problems no-one dares touch the basic methods, so all that's left is local code optimisation or throwing more hardware at the problem.
And if the problem is that you used an O(n^2) sort rather than an O(n.log(n)) sort, or you computed a cross-product of two large tables (O(n^2) at least for both space and time), or you needlessly wrote some hideous O(n!
) search which is NP complete, then no amount of profiling and instruction tuning is ever going to help you.
I think that at the outset of a design process, it's important to consider the complexity order of any algorithm used which has the potential to process large amounts of data, as otherwise you can see some startlingly bad performance problems as you scale.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30776028</parent>
</comment>
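The scaling gap igb describes is easy to make concrete. A sketch (illustrative numbers only, not from the comment) comparing the operation counts of an O(n^2) approach with an O(n log n) one:

```python
import math

def ratio(n):
    # How many times more work n^2 costs than n*log2(n) at size n;
    # simplifies to n / log2(n).
    return (n * n) / (n * math.log2(n))

# The gap widens with scale: roughly 100x at n = 1,000 and roughly
# 50,000x at n = 1,000,000, which is why no amount of profiling and
# instruction tuning rescues the wrong algorithm.
```

Constant-factor tuning (the subject of the rest of this thread) moves that ratio by perhaps 10x; choosing the right complexity class moves it without bound.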
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773410</id>
	<title>Steve Jobs shits</title>
	<author>Anonymous</author>
	<datestamp>1263475740000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext>What do you Apple dick smokers think of your god now?</htmltext>
<tokenext>What do you Apple dick smokers think of your god now ?</tokentext>
<sentencetext>What do you Apple dick smokers think of your god now?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772780</id>
	<title>Well that is /.'d</title>
	<author>Com2Kid</author>
	<datestamp>1263472380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>/.'d, to say the least.  Wow.</p><p>Great lecture so far, 2 minute pauses every 20 seconds make it kind of hard to listen to though!</p></htmltext>
<tokenext>/ .
'd , to say the least .
Wow . Great lecture so far , 2 minute pauses every 20 seconds make it kind of hard to listen to though !</tokentext>
<sentencetext>/.
'd, to say the least.
Wow. Great lecture so far, 2 minute pauses every 20 seconds make it kind of hard to listen to though!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30777476</id>
	<title>Re:Premature optimization is evil... and stupid</title>
	<author>epine</author>
	<datestamp>1263561300000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><blockquote><div><p>That's the main reason why I want to shoot people who write "clever" code on the first pass.</p></div></blockquote><p>Over the years, I've grown to hate this meme.  Not because it isn't right, but because it stops ten floors below the penthouse of human potential.</p><p>First of all, it's an incredible instance of cultural drift.  In the mid 1980s, when this meme was halfway current, I worked on adding support for Asian characters to an Asian-made PC.  On the "make it right" pass it took 15s to update the screen after pressing the page down key, and this from assembly language.  Slower than YouTube over 300 baud.  It was doing a lot of pixel swizzling it shouldn't have been, because the fonts were supplied in a format better suited to printing.  This was an order of magnitude below an invitation to whiffle-ball training camp.  This was Lance Armstrong during his chemotherapy years towing a baby trailer.  Today you get 60fps with a 100 thousand or a 100 million polygons, I've sort of lost track.</p><p>Let's not shunt performance onto the side track of irrelevancy.  While there's no good excuse, ever, for writing faulty code, an enlightened balance between starting out with an approach you can live with, and exploiting necessary cleverness *within your ability* goes a long way.</p><p>How about we update Knuth's arthritic maxim?  <b>Don't tweak what you don't grok.</b>  If you grok, use your judgement.  Exploit your human potential.  Live a little.</p><p>The books I've been reading lately about the evolution of skills in the work place suggest that painstaking reductive work processes are on their way to India.  Job security in home world is greatly enhanced if you can navigate multiple agendas in tandem, exploiting more of that judgement thing.</p><p>One of the reasons Carmack became so successful is that he didn't waste his effort looking for excuses to deprive his co-workers of their oxygen bits.  
Instead he conducted shrewd excursions along the edge of the envelope in pursuit of the sweet spot between cleverness too oppressive to live with, and no performance at all.</p><p>In my day of deprecating my elders, I always knew where the pea was hidden under the mattress.  These days, there are so many squishy mattresses stacked one upon the other, I have to plan my work day with a step ladder.  Which I think is what this unwatchable cult-encoded video is on about: the ankle level view most of us never see any more.</p><p>Here's another thing.  <b>If you're going to be clever about how you code something, also be clever about how you do it.</b>  In other words, be equally clever at all levels of the solution process simultaneously: algorithm selection, implementation, commenting, software engineering, documentation, and unit test.  Knuth got away with TeX, barely, for precisely this reason.  Because of his cleverness, the extension to handle Asian languages was far from elegant.  Because of his cleverness (in making everything else run extremely well), people actually wanted to extend TeX to handle Asian languages.  So who's to say he was wrong?  Despite his cleverness, he managed to keep his booboo score in single or low double digits.  His bug tracking database fit nicely on an index card.</p><p>In the modern era, people quote the old "make it right before you make it faster" as the cure for the halitosis of ineptitude: you're feeble and irritating, so practice your social graces.  Don't make me come over there and choke off your oxygen bit.  It's a long ways from saying "you have a lot of human potential, and not much experience, so let me help you confront the challenges in a meaningful way".  These sayings leak a lot of sentiment about social engagement.</p><p>Every so often I have to pull up a chair beside a junior resource and go "Dude, you're jousting at windmills here, let's roll that change back and try again.  I know you can do better."  
Five minutes of war stories about how to shoot yourself in the foot six ways from Sunday is usually enough to rebalance the flywheel of self preservation.</p><p>One thing I will add is that I've always worked in small teams, and rarely experienced the case where a team member adds a lot of cleverness to the code, then bakes off to another project, before the cleverness comes back to haunt the next release cycle.  Neither did Carmack, neither did Knuth.</p><p>There's also an availability bias at work here.  I've seen many cases where something very clever was engineered into the plumbing that worked so well, no one lifted the hatch on that corner of the code base for years at a time.  So again, the problem is not cleverness, but badly judged cleverness.</p><p>Or badly judged laziness.  I have a tenant downstairs who forgets to clean up after his dog.  My bike hangs in a shared laundry area, adjoining his suite.  Sooner or later, he'll figure it out.</p>
	</htmltext>
<tokenext>That 's the main reason why I want to shoot people who write " clever " code on the first pass.Over the years , I 've grown to hate this meme .
Not because it is n't right , but because it stops ten floors below the penthouse of human potential.First of all , it 's an incredible instance of cultural drift .
In the mid 1980s , when this meme was halfway current , I worked on adding support for Asian characters to an Asian-made PC .
On the " make it right " pass it took 15s to update the screen after pressing the page down key , and this from assembly language .
Slower than YouTube over 300 baud .
It was doing a lot of pixel swizzling it should n't have been , because the fonts were supplied in a format better suited to printing .
This was an order of magnitude below an invitation to whiffle-ball training camp .
This was Lance Armstrong during his chemotherapy years towing a baby trailer .
Today you get 60fps with a 100 thousand or a 100 million polygons , I 've sort of lost track . Let 's not shunt performance onto the side track of irrelevancy .
While there 's no good excuse , ever , for writing faulty code , an enlightened balance between starting out with an approach you can live with , and exploiting necessary cleverness * within your ability * goes a long way . How about we update Knuth 's arthritic maxim ?
Do n't tweak what you do n't grok .
If you grok , use your judgement .
Exploit your human potential .
Live a little . The books I 've been reading lately about the evolution of skills in the work place suggest that painstaking reductive work processes are on their way to India .
Job security in home world is greatly enhanced if you can navigate multiple agendas in tandem , exploiting more of that judgement thing . One of the reasons Carmack became so successful is that he did n't waste his effort looking for excuses to deprive his co-workers of their oxygen bits .
Instead he conducted shrewd excursions along the edge of the envelope in pursuit of the sweet spot between cleverness too oppressive to live with , and no performance at all . In my day of deprecating my elders , I always knew where the pea was hidden under the mattress .
These days , there are so many squishy mattresses stacked one upon the other , I have to plan my work day with a step ladder .
Which I think is what this unwatchable cult-encoded video is on about : the ankle level view most of us never see any more . Here 's another thing .
If you 're going to be clever about how you code something , also be clever about how you do it .
In other words , be equally clever at all levels of the solution process simultaneously : algorithm selection , implementation , commenting , software engineering , documentation , and unit test .
Knuth got away with TeX , barely , for precisely this reason .
Because of his cleverness , the extension to handle Asian languages was far from elegant .
Because of his cleverness ( in making everything else run extremely well ) , people actually wanted to extend TeX to handle Asian languages .
So who 's to say he was wrong ?
Despite his cleverness , he managed to keep his booboo score in single or low double digits .
His bug tracking database fit nicely on an index card .
In the modern era , people quote the old " make it right before you make it faster " as the cure for the halitosis of ineptitude : you 're feeble and irritating , so practice your social graces .
Do n't make me come over there and choke off your oxygen bit .
It 's a long ways from saying " you have a lot of human potential , and not much experience , so let me help you confront the challenges in a meaningful way " .
These sayings leak a lot of sentiment about social engagement .
Every so often I have to pull up a chair beside a junior resource and go " Dude , you 're jousting at windmills here , let 's roll that change back and try again .
I know you can do better . "
Five minutes of war stories about how to shoot yourself in the foot six ways from Sunday is usually enough to rebalance the flywheel of self preservation .
One thing I will add is that I 've always worked in small teams , and rarely experienced the case where a team member adds a lot of cleverness to the code , then bakes off to another project , before the cleverness comes back to haunt the next release cycle .
Neither did Carmack , neither did Knuth .
There 's also an availability bias at work here .
I 've seen many cases where something very clever was engineered into the plumbing that worked so well , no one lifted the hatch on that corner of the code base for years at a time .
So again , the problem is not cleverness , but badly judged cleverness .
Or badly judged laziness .
I have a tenant downstairs who forgets to clean up after his dog .
My bike hangs in a shared laundry area , adjoining his suite .
Sooner or later , he 'll figure it out .</tokentext>
<sentencetext>That's the main reason why I want to shoot people who write "clever" code on the first pass.
Over the years, I've grown to hate this meme.
Not because it isn't right, but because it stops ten floors below the penthouse of human potential.
First of all, it's an incredible instance of cultural drift.
In the mid 1980s, when this meme was halfway current, I worked on adding support for Asian characters to an Asian-made PC.
On the "make it right" pass it took 15s to update the screen after pressing the page down key, and this from assembly language.
Slower than YouTube over 300 baud.
It was doing a lot of pixel swizzling it shouldn't have been, because the fonts were supplied in a format better suited to printing.
This was an order of magnitude below an invitation to whiffle-ball training camp.
This was Lance Armstrong during his chemotherapy years towing a baby trailer.
Today you get 60fps with a 100 thousand or a 100 million polygons, I've sort of lost track.
Let's not shunt performance onto the side track of irrelevancy.
While there's no good excuse, ever, for writing faulty code, an enlightened balance between starting out with an approach you can live with, and exploiting necessary cleverness *within your ability* goes a long way.
How about we update Knuth's arthritic maxim?
Don't tweak what you don't grok.
If you grok, use your judgement.
Exploit your human potential.
Live a little.
The books I've been reading lately about the evolution of skills in the work place suggest that painstaking reductive work processes are on their way to India.
Job security in home world is greatly enhanced if you can navigate multiple agendas in tandem, exploiting more of that judgement thing.
One of the reasons Carmack became so successful is that he didn't waste his effort looking for excuses to deprive his co-workers of their oxygen bits.
Instead he conducted shrewd excursions along the edge of the envelope in pursuit of the sweet spot between cleverness too oppressive to live with, and no performance at all.
In my day of deprecating my elders, I always knew where the pea was hidden under the mattress.
These days, there are so many squishy mattresses stacked one upon the other, I have to plan my work day with a step ladder.
Which I think is what this unwatchable cult-encoded video is on about: the ankle level view most of us never see any more.
Here's another thing.
If you're going to be clever about how you code something, also be clever about how you do it.
In other words, be equally clever at all levels of the solution process simultaneously: algorithm selection, implementation, commenting, software engineering, documentation, and unit test.
Knuth got away with TeX, barely, for precisely this reason.
Because of his cleverness, the extension to handle Asian languages was far from elegant.
Because of his cleverness (in making everything else run extremely well), people actually wanted to extend TeX to handle Asian languages.
So who's to say he was wrong?
Despite his cleverness, he managed to keep his booboo score in single or low double digits.
His bug tracking database fit nicely on an index card.
In the modern era, people quote the old "make it right before you make it faster" as the cure for the halitosis of ineptitude: you're feeble and irritating, so practice your social graces.
Don't make me come over there and choke off your oxygen bit.
It's a long ways from saying "you have a lot of human potential, and not much experience, so let me help you confront the challenges in a meaningful way".
These sayings leak a lot of sentiment about social engagement.
Every so often I have to pull up a chair beside a junior resource and go "Dude, you're jousting at windmills here, let's roll that change back and try again.
I know you can do better."
Five minutes of war stories about how to shoot yourself in the foot six ways from Sunday is usually enough to rebalance the flywheel of self preservation.
One thing I will add is that I've always worked in small teams, and rarely experienced the case where a team member adds a lot of cleverness to the code, then bakes off to another project, before the cleverness comes back to haunt the next release cycle.
Neither did Carmack, neither did Knuth.
There's also an availability bias at work here.
I've seen many cases where something very clever was engineered into the plumbing that worked so well, no one lifted the hatch on that corner of the code base for years at a time.
So again, the problem is not cleverness, but badly judged cleverness.
Or badly judged laziness.
I have a tenant downstairs who forgets to clean up after his dog.
My bike hangs in a shared laundry area, adjoining his suite.
Sooner or later, he'll figure it out.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772870</id>
	<title>tl;dr</title>
	<author>Anonymous</author>
	<datestamp>1263472920000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><div class="quote"><p>Cliff Click talks about why it's almost impossible to tell what an x86 chip is really doing to your code</p></div><p>tl;dr<br>Don't care... because I believe in abstraction!</p>
	</htmltext>
<tokentext>Cliff Click talks about why it 's almost impossible to tell what an x86 chip is really doing to your code
tl ; dr
Do n't care... because I believe in abstraction !</tokentext>
<sentencetext>Cliff Click talks about why it's almost impossible to tell what an x86 chip is really doing to your code
tl;dr
Don't care... because I believe in abstraction!
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774632</id>
	<title>Re:Premature optimization is evil... and stupid</title>
	<author>Estanislao Martínez</author>
	<datestamp>1263484320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>I think that the premature optimization claims are way overdone. In the cases where performance does not matter, then sure, make the code as readable as possible and just accept the performance.  However, sometimes it is known from the beginning of a project that performance is critical and that achieving that performance will be a challenge. In such cases, I think that it makes sense to design for performance.</p></div></blockquote><p>Well, and then there's another approach, where you first write a fully-functional and readable implementation of the solution without regard to performance until you get it right, then rewrite the really critical parts from scratch to be a lot faster.

</p><p>I've been involved with projects that went like this.  Typically, the first stage is necessary because the task is very exploratory--e.g., write a fairly generic computation engine that processes user-defined formulas.  The first pass is slow, but it serves to prove that you're on the right track, and then you rewrite it to be fast (typically by changing it from an interpreter-like design to a compiler-like one).</p><blockquote><div><p>I work on software for processing medical device data and performance is often critical. You probably want an image display to update very quickly when it is providing feedback to the doctor guiding a catheter toward your heart, for example.</p></div></blockquote><p>Well, yeah, real-time is a requirement that must be built into the design.</p><blockquote><div><p>We had one project where the team decided to start over with a clean framework without concern for performance -- they would profile and optimize once everything was working. They followed the advice of many a software engineer: their framework was very nice, replete with design patterns and applications of generic programming, and entirely unscalable beyond a single processor core.</p></div></blockquote><p>Partly this is an indication of how all the faddish evangelism about "frameworks" and "design" is often nonsense, and in this case, damned by the stated goals.  Their claim that the framework was "generic" is contradicted by the fact that it doesn't scale beyond a single core.  Basically, "generic" is supposed to mean that as few assumptions as possible are built in, yet the framework slipped in a big one-core assumption.</p><blockquote><div><p>There were no performance tests done during development, and of course the timeline was such that there would only be minimal time for optimization once the functionality was complete.</p></div></blockquote><p>And that of course is as big of an error as any of the other ones.</p>
	</htmltext>
<tokentext>I think that the premature optimization claims are way overdone .
In the cases where performance does not matter , then sure , make the code as readable as possible and just accept the performance .
However , sometimes it is known from the beginning of a project that performance is critical and that achieving that performance will be a challenge .
In such cases , I think that it makes sense to design for performance .
Well , and then there 's another approach , where you first write a fully-functional and readable implementation of the solution without regard to performance until you get it right , then rewrite the really critical parts from scratch to be a lot faster .
I 've been involved with projects that went like this .
Typically , the first stage is necessary because the task is very exploratory--e.g. , write a fairly generic computation engine that processes user-defined formulas .
The first pass is slow , but it serves to prove that you 're on the right track , and then you rewrite it to be fast ( typically by changing it from an interpreter-like design to a compiler-like one ) .
I work on software for processing medical device data and performance is often critical .
You probably want an image display to update very quickly when it is providing feedback to the doctor guiding a catheter toward your heart , for example .
Well , yeah , real-time is a requirement that must be built into the design .
We had one project where the team decided to start over with a clean framework without concern for performance -- they would profile and optimize once everything was working .
They followed the advice of many a software engineer : their framework was very nice , replete with design patterns and applications of generic programming , and entirely unscalable beyond a single processor core .
Partly this is an indication of how all the faddish evangelism about " frameworks " and " design " is often nonsense , and in this case , damned by the stated goals .
Their claim that the framework was " generic " is contradicted by the fact that it does n't scale beyond a single core .
Basically , " generic " is supposed to mean that as few assumptions as possible are built in , yet the framework slipped in a big one-core assumption .
There were no performance tests done during development , and of course the timeline was such that there would only be minimal time for optimization once the functionality was complete .
And that of course is as big of an error as any of the other ones .</tokentext>
<sentencetext>I think that the premature optimization claims are way overdone.
In the cases where performance does not matter, then sure, make the code as readable as possible and just accept the performance.
However, sometimes it is known from the beginning of a project that performance is critical and that achieving that performance will be a challenge.
In such cases, I think that it makes sense to design for performance.
Well, and then there's another approach, where you first write a fully-functional and readable implementation of the solution without regard to performance until you get it right, then rewrite the really critical parts from scratch to be a lot faster.
I've been involved with projects that went like this.
Typically, the first stage is necessary because the task is very exploratory--e.g., write a fairly generic computation engine that processes user-defined formulas.
The first pass is slow, but it serves to prove that you're on the right track, and then you rewrite it to be fast (typically by changing it from an interpreter-like design to a compiler-like one).
I work on software for processing medical device data and performance is often critical.
You probably want an image display to update very quickly when it is providing feedback to the doctor guiding a catheter toward your heart, for example.
Well, yeah, real-time is a requirement that must be built into the design.
We had one project where the team decided to start over with a clean framework without concern for performance -- they would profile and optimize once everything was working.
They followed the advice of many a software engineer: their framework was very nice, replete with design patterns and applications of generic programming, and entirely unscalable beyond a single processor core.
Partly this is an indication of how all the faddish evangelism about "frameworks" and "design" is often nonsense, and in this case, damned by the stated goals.
Their claim that the framework was "generic" is contradicted by the fact that it doesn't scale beyond a single core.
Basically, "generic" is supposed to mean that as few assumptions as possible are built in, yet the framework slipped in a big one-core assumption.
There were no performance tests done during development, and of course the timeline was such that there would only be minimal time for optimization once the functionality was complete.
And that of course is as big of an error as any of the other ones.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774118</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772920</id>
	<title>Skynet...</title>
	<author>Anonymous</author>
	<datestamp>1263473160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Now that no one knows what they're doing, who's to keep them from merging?  How long is it before several machines of x86 chips become self-aware?  The end is nigh, comrades!</p><p>Alternatively, maybe they'll become Data.</p></htmltext>
<tokentext>Now that no one knows what they 're doing , who 's to keep them from merging ?
How long is it before several machines of x86 chips become self-aware ?
The end is nigh , comrades !
Alternatively , maybe they 'll become Data .</tokentext>
<sentencetext>Now that no one knows what they're doing, who's to keep them from merging?
How long is it before several machines of x86 chips become self-aware?
The end is nigh, comrades!
Alternatively, maybe they'll become Data.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773272</id>
	<title>Re:Code in high-level</title>
	<author>KC1P</author>
	<datestamp>1263475020000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>That's a real shame!  But my impression is that for a long time now, college-level assembly instruction has consisted almost entirely of indoctrinating the students to believe that assembly language programming is difficult and unpleasant and must be avoided at all costs.  Which couldn't be more wrong -- it's AWESOME!</p><p>Even on the x86 with all its flaws, being able to have that kind of control makes everything more fun.  The fact that your code runs like a bat out of hell (unless you're a BAD assembly programmer, which a lot of people are but they don't realize it so they bad-mouth the language) is just icing on the cake.  You should definitely teach yourself assembly, if you can find the time.</p></htmltext>
<tokentext>That 's a real shame !
But my impression is that for a long time now , college-level assembly instruction has consisted almost entirely of indoctrinating the students to believe that assembly language programming is difficult and unpleasant and must be avoided at all costs .
Which could n't be more wrong -- it 's AWESOME !
Even on the x86 with all its flaws , being able to have that kind of control makes everything more fun .
The fact that your code runs like a bat out of hell ( unless you 're a BAD assembly programmer , which a lot of people are but they do n't realize it so they bad-mouth the language ) is just icing on the cake .
You should definitely teach yourself assembly , if you can find the time .</tokentext>
<sentencetext>That's a real shame!
But my impression is that for a long time now, college-level assembly instruction has consisted almost entirely of indoctrinating the students to believe that assembly language programming is difficult and unpleasant and must be avoided at all costs.
Which couldn't be more wrong -- it's AWESOME!
Even on the x86 with all its flaws, being able to have that kind of control makes everything more fun.
The fact that your code runs like a bat out of hell (unless you're a BAD assembly programmer, which a lot of people are but they don't realize it so they bad-mouth the language) is just icing on the cake.
You should definitely teach yourself assembly, if you can find the time.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773974</id>
	<title>Re:Premature optimization is evil... and stupid</title>
	<author>tomtefar</author>
	<datestamp>1263479160000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext>I have the following sticker on top of my display:

"Make it work before you make it fast!"

Saved me many hours of work.</htmltext>
<tokentext>I have the following sticker on top of my display : " Make it work before you make it fast ! "
Saved me many hours of work .</tokentext>
<sentencetext>I have the following sticker on top of my display:

"Make it work before you make it fast!"

Saved me many hours of work.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774296</id>
	<title>Re:Code in high-level</title>
	<author>SETIGuy</author>
	<datestamp>1263481620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>
In non-trivial single threaded application code on a modern processor, the CPU core is spending about 95% of its time waiting on memory transfers.  To fix that problem, it can make sense to prefetch and reorder memory accesses.  Chances are you know better than your compiler how to do that.   It also makes sense to start more threads on a processor with multiple hardware threads so you can do things while waiting for memory.
</p><p>
Most programmers won't even bother to do that, because the processor is fast enough to do what they want without the optimization.  Only in heavy duty numerical code and in games does optimization by hand get done.  Where you really need top performance regardless of the platform, coders will write multiple versions of a core routine and time them to find what's best on the machine being used.
</p></htmltext>
<tokentext>In non-trivial single threaded application code on a modern processor , the CPU core is spending about 95 % of its time waiting on memory transfers .
To fix that problem , it can make sense to prefetch and reorder memory accesses .
Chances are you know better than your compiler how to do that .
It also makes sense to start more threads on a processor with multiple hardware threads so you can do things while waiting for memory .
Most programmers wo n't even bother to do that , because the processor is fast enough to do what they want without the optimization .
Only in heavy duty numerical code and in games does optimization by hand get done .
Where you really need top performance regardless of the platform , coders will write multiple versions of a core routine and time them to find what 's best on the machine being used .</tokentext>
<sentencetext>
In non-trivial single threaded application code on a modern processor, the CPU core is spending about 95% of its time waiting on memory transfers.
To fix that problem, it can make sense to prefetch and reorder memory accesses.
Chances are you know better than your compiler how to do that.
It also makes sense to start more threads on a processor with multiple hardware threads so you can do things while waiting for memory.
Most programmers won't even bother to do that, because the processor is fast enough to do what they want without the optimization.
Only in heavy duty numerical code and in games does optimization by hand get done.
Where you really need top performance regardless of the platform, coders will write multiple versions of a core routine and time them to find what's best on the machine being used.
</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772964</id>
	<title>Re:Well that is /.'d</title>
	<author>MaskedSlacker</author>
	<datestamp>1263473340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Try waiting for it to fully buffer?</p></htmltext>
<tokentext>Try waiting for it to fully buffer ?</tokentext>
<sentencetext>Try waiting for it to fully buffer?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772780</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30779476</id>
	<title>Re:Premature optimization is evil... and stupid</title>
	<author>chelberg</author>
	<datestamp>1263574140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That's why they invented the size_t type!</p><p>I use this type to ref all arrays, etc. whenever possible.</p></htmltext>
<tokentext>That 's why they invented the size_t type !
I use this type to ref all arrays , etc .
whenever possible .</tokentext>
<sentencetext>That's why they invented the size_t type!
I use this type to ref all arrays, etc.
whenever possible.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773114</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773114</id>
	<title>Re:Premature optimization is evil... and stupid</title>
	<author>marcansoft</author>
	<datestamp>1263474000000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>Using shift to multiply is often a great idea on most CPUs. On the other hand, just about every compiler will do that for you (even with optimization turned off I bet), so there's no reason to explicitly use shift in code (unless you're doing bit manipulation, or multiplying by 2^n where n is more convenient to use than 2^n). However, a much more important thing is to correctly specify signed/unsigned where needed. Signed arithmetic can make certain optimizations harder and in general it's harder to think about. One of my gripes about C is defaulting to signed for integer types, when most integers out there are only ever used to hold positive values.</p></htmltext>
<tokentext>Using shift to multiply is often a great idea on most CPUs .
On the other hand , just about every compiler will do that for you ( even with optimization turned off I bet ) , so there 's no reason to explicitly use shift in code ( unless you 're doing bit manipulation , or multiplying by 2 ^ n where n is more convenient to use than 2 ^ n ) .
However , a much more important thing is to correctly specify signed/unsigned where needed .
Signed arithmetic can make certain optimizations harder and in general it 's harder to think about .
One of my gripes about C is defaulting to signed for integer types , when most integers out there are only ever used to hold positive values .</tokentext>
<sentencetext>Using shift to multiply is often a great idea on most CPUs.
On the other hand, just about every compiler will do that for you (even with optimization turned off I bet), so there's no reason to explicitly use shift in code (unless you're doing bit manipulation, or multiplying by 2^n where n is more convenient to use than 2^n).
However, a much more important thing is to correctly specify signed/unsigned where needed.
Signed arithmetic can make certain optimizations harder and in general it's harder to think about.
One of my gripes about C is defaulting to signed for integer types, when most integers out there are only ever used to hold positive values.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775414</id>
	<title>Re:Code in high-level</title>
	<author>gandhi_2</author>
	<datestamp>1263491640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>And who will write these tools for us? Code generators?</p></htmltext>
<tokentext>And who will write these tools for us ?
Code generators ?</tokentext>
<sentencetext>And who will write these tools for us?
Code generators?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774060</id>
	<title>rule of the code</title>
	<author>Bork</author>
	<datestamp>1263479880000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>Just write good clean code that works properly first.  The only time you optimize is after it has been profiled to see if there are troublesome spots.  The way CPUs run and how compilers are designed, there is very little need to do optimization.  Unless you have taken some serious courses on how current CPUs work, your efforts will mostly result in bad code that gains you nothing in respect to speed.  Your time is better spent on writing CORRECT code.</p><p>The compilers are very intelligent in proper loop unrolling, rearranging branches, and moving instruction code around to keep the CPU pipeline full.   They will also look for unnecessary/redundant instructions within a loop and move them to a better spot.</p><p>One of the courses I took was programming for parallelism.  For extra credit, the instructor assigned a 27K x 27K matrix multiply; the person with the best time got a few extra points.  A lot of the class worked hard in trying to optimize their code to get better times; I got the best time by playing with the compiler flags.</p></htmltext>
<tokentext>Just write good clean code that works properly first .
The only time you optimize is after it has been profiled to see if there are troublesome spots .
The way CPUs run and how compilers are designed , there is very little need to do optimization .
Unless you have taken some serious courses on how current CPUs work , your efforts will mostly result in bad code that gains you nothing in respect to speed .
Your time is better spent on writing CORRECT code .
The compilers are very intelligent in proper loop unrolling , rearranging branches , and moving instruction code around to keep the CPU pipeline full .
They will also look for unnecessary/redundant instructions within a loop and move them to a better spot .
One of the courses I took was programming for parallelism .
For extra credit , the instructor assigned a 27K x 27K matrix multiply ; the person with the best time got a few extra points .
A lot of the class worked hard in trying to optimize their code to get better times ; I got the best time by playing with the compiler flags .</tokentext>
<sentencetext>Just write good clean code that works properly first.
The only time you optimize is after it has been profiled to see if there are troublesome spots.
The way CPUs run and how compilers are designed, there is very little need to do optimization.
Unless you have taken some serious courses on how current CPUs work, your efforts will mostly result in bad code that gains you nothing in respect to speed.
Your time is better spent on writing CORRECT code.
The compilers are very intelligent in proper loop unrolling, rearranging branches, and moving instruction code around to keep the CPU pipeline full.
They will also look for unnecessary/redundant instructions within a loop and move them to a better spot.
One of the courses I took was programming for parallelism.
For extra credit, the instructor assigned a 27K x 27K matrix multiply; the person with the best time got a few extra points.
A lot of the class worked hard in trying to optimize their code to get better times; I got the best time by playing with the compiler flags.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772944</id>
	<title>Re:Fast forward...</title>
	<author>Anonymous</author>
	<datestamp>1263473280000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>0</modscore>
	<htmltext><p>You mean you actually got it to play instead of stare at a play button? Can we please kill Flash already?</p></htmltext>
<tokentext>You mean you actually got it to play instead of stare at a play button ?
Can we please kill Flash already ?</tokentext>
<sentencetext>You mean you actually got it to play instead of stare at a play button?
Can we please kill Flash already?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772618</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773342</id>
	<title>Re:Code in high-level</title>
	<author>oldhack</author>
	<datestamp>1263475380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>
Yeah, probably makes sense only for DSPs and microcontrollers.  But then isn't 68k used as microcontrollers now?
</p><p>
We used to say there were too many layers of shit.  Now it's truly "turtles all the way down."</p></htmltext>
<tokenext>Yeah , probably makes sense only for DSPs and microcontrollers .
But then is n't 68k used as microcontrollers now ?
We used to say there were too many layers of shit .
Now it 's truly " turtles all the way down .
"</tokentext>
<sentencetext>
Yeah, probably makes sense only for DSPs and microcontrollers.
But then isn't 68k used as microcontrollers now?
We used to say there were too many layers of shit.
Now it's truly "turtles all the way down.
"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775740</id>
	<title>Re:Code in high-level</title>
	<author>stinerman</author>
	<datestamp>1263495720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Odd.  Assembler was a required course at my college for CS/CEG students.  Of course, they taught m68k assembler because it was a lot easier, but that was our class.</p></htmltext>
<tokenext>Odd .
Assembler was a required course at my college for CS/CEG students .
Of course , they taught m68k assembler because it was a lot easier , but that was our class .</tokentext>
<sentencetext>Odd.
Assembler was a required course at my college for CS/CEG students.
Of course, they taught m68k assembler because it was a lot easier, but that was our class.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30783150</id>
	<title>Torrent for video</title>
	<author>Anonymous</author>
	<datestamp>1263546900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>It took me hours to download this video, trying several different mirrors. So, here's a bittorrent version of the <a href="http://thepiratebay.org/torrent/5282386" title="thepiratebay.org" rel="nofollow">video</a> [thepiratebay.org]</p></htmltext>
<tokenext>It took me hours to download this video , trying several different mirrors .
So , here 's a bittorrent version of the video [ thepiratebay.org ]</tokentext>
<sentencetext>It took me hours to download this video, trying several different mirrors.
So, here's a bittorrent version of the video [thepiratebay.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772618</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774808</id>
	<title>Lots of low Slashdot IDs commenting on this...</title>
	<author>slagheap</author>
	<datestamp>1263485580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Let's start the pissing contest:</p><p>I have a 6-digit slashdot ID.  Beat that you newbs!</p></htmltext>
<tokenext>Let 's start the pissing contest : I have a 6-digit slashdot ID .
Beat that you newbs !</tokentext>
<sentencetext>Let's start the pissing contest: I have a 6-digit slashdot ID.
Beat that you newbs!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774630</id>
	<title>In C/C++ shift is not the same as multiply/divide</title>
	<author>perpenso</author>
	<datestamp>1263484320000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><blockquote><div><p>Using shift to multiply is often a great idea on most CPUs.</p></div></blockquote><p>

In C/C++ shift is not the same as multiply/divide by 2.  Multiplication and division operators have a different precedence level than shift operators.  Not only is there the possibility of poor optimization but such a substitution may lead to a computational error.  For example mul/div has a higher precedence than add/sub, but shift has a lower precedence:<br> <br>

<tt>
    printf(&quot; 3 *  2  + 1 = \%d\n&quot;,  3 *  2  + 1);<br>
    printf(&quot; 3 &lt;&lt; 1  + 1 = \%d\n&quot;,  3 &lt;&lt; 1  + 1);<br>
    printf(&quot;(3 &lt;&lt; 1) + 1 = \%d\n&quot;, (3 &lt;&lt; 1) + 1);<br> <br>

     3 *  2  + 1 = 7<br>
     3 &lt;&lt; 1  + 1 = 12<br>
    (3 &lt;&lt; 1) + 1 = 7<br>
</tt>
<br>
--<br>
<a href="http://www.perpenso.com/calc/" title="perpenso.com" rel="nofollow">Perpenso Calc</a> [perpenso.com] for iPhone and iPod touch, scientific and bill/tip calculator, fractions, complex numbers, RPN</p>
	</htmltext>
<tokenext>Using shift to multiply is often a great idea on most CPUs .
In C/C + + shift is not the same as multiply/divide by 2 .
Multiplication and division operators have a different precedence level than shift operators .
Not only is there the possibility of poor optimization but such a substitution may lead to a computational error .
For example mul/div has a higher precedence than add/sub , but shift has a lower precedence : printf ( " 3 * 2 + 1 = \ % d \ n " , 3 * 2 + 1 ) ; printf ( " 3 &lt;&lt; 1 + 1 = \ % d \ n " , 3 &lt;&lt; 1 + 1 ) ; printf ( " ( 3 &lt;&lt; 1 ) + 1 = \ % d \ n " , ( 3 &lt;&lt; 1 ) + 1 ) ; 3 * 2 + 1 = 7 3 &lt;&lt; 1 + 1 = 12 ( 3 &lt;&lt; 1 ) + 1 = 7 -- Perpenso Calc [ perpenso.com ] for iPhone and iPod touch , scientific and bill/tip calculator , fractions , complex numbers , RPN</tokentext>
<sentencetext>Using shift to multiply is often a great idea on most CPUs.
In C/C++ shift is not the same as multiply/divide by 2.
Multiplication and division operators have a different precedence level than shift operators.
Not only is there the possibility of poor optimization but such a substitution may lead to a computational error.
For example mul/div has a higher precedence than add/sub, but shift has a lower precedence: 

    printf(" 3 *  2  + 1 = \%d\n",  3 *  2  + 1);
    printf(" 3 &lt;&lt; 1  + 1 = \%d\n",  3 &lt;&lt; 1  + 1);
    printf("(3 &lt;&lt; 1) + 1 = \%d\n", (3 &lt;&lt; 1) + 1);

     3 *  2  + 1 = 7
     3 &lt;&lt; 1  + 1 = 12
    (3 &lt;&lt; 1) + 1 = 7


--
Perpenso Calc [perpenso.com] for iPhone and iPod touch, scientific and bill/tip calculator, fractions, complex numbers, RPN
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773114</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773982</id>
	<title>Re:Code in high-level</title>
	<author>Kjella</author>
	<datestamp>1263479280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>I wanted to take ASM in college. I was the only student who showed up for the class and the class was canceled. Since most of the programming classes was Java-centric, no one wanted to get their hands dirty under the hood.</p></div><p>I'm probably going to need an asbestos suit for this post, but to be honest I don't think assembler is a good programming language for humans. My impression is that they absolutely don't want to pollute the instruction set with instructions unless there's a performance benefit to doing so. But what it means in practice is that anyone I've seen writing advanced assembly relies on lots and lots of macros to do essential things, because the combination of instructions is useful but there's no language construct. For example, in general you JMP everywhere, which is the low-level equivalent of GOTO, and you use that to create the equivalent of FOR and WHILE etc., which is neat to have seen once but gets quite tedious to do over and over.</p><p>Most of the real-world issues I run into aren't of the type "yeah, with an assembler optimization here we could squeeze another 2\% out of it". It's stuff like "wtf, why are you putting that inside the loop?" or "why are you doing this processing one by one when a batch update would do this 1000x faster?" If you have a clue about what's happening in C, if you know when memory is allocated/deallocated and that the basic operations you do make sense, you'll write better code than 90\% of the developers out there anyway.</p>
	</htmltext>
<tokenext>I wanted to take ASM in college .
I was the only student who showed up for the class and the class was canceled .
Since most of the programming classes was Java-centric , no one wanted to get their hands dirty under the hood.I 'm probably going to need an asbestos suit for this post , but to be honest I do n't think assembler is a good programming language for humans .
My impression is that they absolutely do n't want to pollute the instruction set with instructions unless there 's a performance benefit to doing so .
But what it means in practice is that anyone I 've seen writing advanced assembly relies on lots and lots of macros to do essential things , because the combination of instructions is useful but there 's no language construct .
For example , in general you JMP everywhere which is the low-level equivalent of GOTO and you use that to create the equivalent of FOR and WHILE etc .
which is neat to have seen once but gets quite tedious to do over and over.Most of the real world issues I run into , are n't of the type " yeah with an assembler optimization here we could squeeze another 2 \ % out of it " , It 's stuff like " wtf why are you putting that inside the loop ?
" or " why are you doing this processing one by one when a batch update would do this 1000x faster ?
" If you got a clue on what 's happening in C , if you know when memory is allocated/deallocated and that the basic operations you do makes sense , you 'll write better code than 90 \ % of the developers out there anyway .</tokentext>
<sentencetext>I wanted to take ASM in college.
I was the only student who showed up for the class and the class was canceled.
Since most of the programming classes was Java-centric, no one wanted to get their hands dirty under the hood.
I'm probably going to need an asbestos suit for this post, but to be honest I don't think assembler is a good programming language for humans.
My impression is that they absolutely don't want to pollute the instruction set with instructions unless there's a performance benefit to doing so.
But what it means in practice is that anyone I've seen writing advanced assembly relies on lots and lots of macros to do essential things, because the combination of instructions is useful but there's no language construct.
For example, in general you JMP everywhere which is the low-level equivalent of GOTO and you use that to create the equivalent of FOR and WHILE etc.
which is neat to have seen once but gets quite tedious to do over and over.
Most of the real-world issues I run into aren't of the type "yeah, with an assembler optimization here we could squeeze another 2\% out of it".
It's stuff like "wtf, why are you putting that inside the loop?"
or "why are you doing this processing one by one when a batch update would do this 1000x faster?"
If you have a clue about what's happening in C, if you know when memory is allocated/deallocated and that the basic operations you do make sense, you'll write better code than 90\% of the developers out there anyway.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640</id>
	<title>Code in high-level</title>
	<author>elh\_inny</author>
	<datestamp>1263471720000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>It doesn't make sense to code in ASM anymore.<br>With computing expanding towards more and more parallelism, I can clearly see that one should learn to start coding in the most abstract way and let the tools do the optimisation for him...</p></htmltext>
<tokenext>Iit does n't make sense to code in ASM anymore.With computing expanding towards more and more parallelism , I can clearly see that one should learn to start coding in the most abstract of way and let the tools do the optimisation for him.. .</tokentext>
<sentencetext>It doesn't make sense to code in ASM anymore.
With computing expanding towards more and more parallelism, I can clearly see that one should learn to start coding in the most abstract way and let the tools do the optimisation for him...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774564</id>
	<title>Re:Code in high-level</title>
	<author>mpgalvin</author>
	<datestamp>1263483660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>How will they learn compilers?  or Driver design?</p><p>"Gentlemen, I have met the code-monkeys and it is us."</p></htmltext>
<tokenext>How will they learn compilers ?
or Driver design ?
" Gentlemen , I have met the code-monkeys and it is us .
"</tokentext>
<sentencetext>How will they learn compilers?
or Driver design?
"Gentlemen, I have met the code-monkeys and it is us.
"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30776028</id>
	<title>Re:Premature optimization is evil... and stupid</title>
	<author>Anonymous</author>
	<datestamp>1263586320000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>Having spent 4 years being one of the primary developers of Apple's main performance analysis tools (CHUD, not Instruments) and having helped developers from nearly every field imaginable tune their applications for performance, I can honestly say that regardless of your performance criteria, you shouldn't be doing anything special for optimization when you first write a program.  Some thought should be given to the architecture and overall data flow of the program and how that design might have some high-level performance limits, but certainly no code should be written using explicit vector operations and all loops should be written for clarity.  Scalability by partitioning the work is one of those items that can generally be incorporated into the program's architecture if the program lends itself to it, but most other performance-related changes depend on specific usage cases.  Trying to guess those while writing the application logic relies solely on intuition which is usually wrong.</p><p>After you've written and debugged the application, profiling and tracing is the prime way for finding \_where\_ to do optimization.  Your experiences have been tainted by the poor quality of tools known by the larger OSS community, but many good tools are free (as in beer) for many OSes (Shark for OS X as an example) while others cost a bit (VTune for Linux or Windows).  Even large, complex multi-threaded programs can be profiled and tuned with decent profilers.  I know for a fact that Shark is used to tune large applications such as Photoshop, Final Cut Pro, Mathematica, and basically every application, daemon, and framework included in OS X.</p><p>What do you do if there really isn't much of a hotspot?  Quake 3 was an example where the time was spread out over many C++ methods so no one hotspot really showed up.  
Using features available in the better profiling tools, the collected samples could be attributed up the stack to the actual algorithms instead of things like simple accessors.  Once you do that, the problems become much more obvious.</p><p>What do you do after the application has been written and a major performance problem is found that would require an architectural change?  Well, you change the architecture.  The reason for not doing it during the initial design is that predicting performance issues is near impossible even for those of us who have spent years doing it as a full time job.  Sure, you have to throw away some code or revisit the design to fix the performance issues, but that's a normal part of software design.  You try an approach, find out why it won't work, and use that knowledge to come up with a new approach.</p><p>The largest failing I see from my experience has been the lack of understanding by management and engineers that performance is a very iterative part of software design and that it happens late in the game.  Frequently, schedules get set without consideration for the amount of time required to do performance analysis, let alone optimization.  Then you have all the engineers who either try to optimize everything they encounter and end up wasting lots of time, or they do the initial implementation and never do any profiling.</p><p>Ultimately, if you try to build performance into a design very early, you end up with a big, messy, unmaintainable code base that isn't actually all that fast.  If you build the design cleanly and then optimize the sections that actually need it, you have a more maintainable code base that meets the requirements.  Be the latter.</p></htmltext>
<tokenext>Having spent 4 years being one of the primary developers of Apple 's main performance analysis tools ( CHUD , not Instruments ) and having helped developers from nearly every field imaginable tune their applications for performance , I can honestly say that regardless of your performance criteria , you should n't be doing anything special for optimization when you first write a program .
Some thought should be given to the architecture and overall data flow of the program and how that design might have some high-level performance limits , but certainly no code should be written using explicit vector operations and all loops should be written for clarity .
Scalability by partitioning the work is one of those items that can generally be incorporated into the program 's architecture if the program lends itself to it , but most other performance-related changes depend on specific usage cases .
Trying to guess those while writing the application logic relies solely on intuition which is usually wrong.After you 've written and debugged the application , profiling and tracing is the prime way for finding \ _where \ _ to do optimization .
Your experiences have been tainted by the poor quality of tools known by the larger OSS community , but many good tools are free ( as in beer ) for many OSes ( Shark for OS X as an example ) while others cost a bit ( VTune for Linux or Windows ) .
Even large , complex multi-threaded programs can be profiled and tuned with decent profilers .
I know for a fact that Shark is used to tune large applications such as Photoshop , Final Cut Pro , Mathematica , and basically every application , daemon , and framework included in OS X.What do you do if there really is n't much of a hotspot ?
Quake 3 was an example where the time was spread out over many C + + methods so no one hotspot really showed up .
Using features available in the better profiling tools , the collected samples could be attributed up the stack to the actual algorithms instead of things like simple accessors .
Once you do that , the problems become much more obvious.What do you do after the application has been written and a major performance problem is found that would require an architectural change ?
Well , you change the architecture .
The reason for not doing it during the initial design is that predicting performance issues is near impossible even for those of us who have spent years doing it as a full time job .
Sure , you have to throw away some code or revisit the design to fix the performance issues , but that 's a normal part of software design .
You try an approach , find out why it wo n't work , and use that knowledge to come up with a new approach.That largest failing I see from my experiences have been the lack of understanding by management and engineers that performance is a very iterative part of software design and that it happens late in the game .
Frequently , schedules get set without consideration for the amount of time required to do performance analysis , let alone optimization .
Then you have all the engineers who either try to optimize everything they encounter and end up wasting lots of time , or they do the initial implementation and never do any profiling.Ultimately , if you try to build performance into a design very early , you end up with a big , messy , unmaintainable code base that is n't actually all that fast .
If you build the design cleanly and then optimize the sections that actually need it , you have a most maintainable code base that meets the requirements .
Be the latter .</tokentext>
<sentencetext>Having spent 4 years being one of the primary developers of Apple's main performance analysis tools (CHUD, not Instruments) and having helped developers from nearly every field imaginable tune their applications for performance, I can honestly say that regardless of your performance criteria, you shouldn't be doing anything special for optimization when you first write a program.
Some thought should be given to the architecture and overall data flow of the program and how that design might have some high-level performance limits, but certainly no code should be written using explicit vector operations and all loops should be written for clarity.
Scalability by partitioning the work is one of those items that can generally be incorporated into the program's architecture if the program lends itself to it, but most other performance-related changes depend on specific usage cases.
Trying to guess those while writing the application logic relies solely on intuition which is usually wrong.
After you've written and debugged the application, profiling and tracing is the prime way for finding \_where\_ to do optimization.
Your experiences have been tainted by the poor quality of tools known by the larger OSS community, but many good tools are free (as in beer) for many OSes (Shark for OS X as an example) while others cost a bit (VTune for Linux or Windows).
Even large, complex multi-threaded programs can be profiled and tuned with decent profilers.
I know for a fact that Shark is used to tune large applications such as Photoshop, Final Cut Pro, Mathematica, and basically every application, daemon, and framework included in OS X.
What do you do if there really isn't much of a hotspot?
Quake 3 was an example where the time was spread out over many C++ methods so no one hotspot really showed up.
Using features available in the better profiling tools, the collected samples could be attributed up the stack to the actual algorithms instead of things like simple accessors.
Once you do that, the problems become much more obvious.
What do you do after the application has been written and a major performance problem is found that would require an architectural change?
Well, you change the architecture.
The reason for not doing it during the initial design is that predicting performance issues is near impossible even for those of us who have spent years doing it as a full time job.
Sure, you have to throw away some code or revisit the design to fix the performance issues, but that's a normal part of software design.
You try an approach, find out why it won't work, and use that knowledge to come up with a new approach.
The largest failing I see from my experience has been the lack of understanding by management and engineers that performance is a very iterative part of software design and that it happens late in the game.
Frequently, schedules get set without consideration for the amount of time required to do performance analysis, let alone optimization.
Then you have all the engineers who either try to optimize everything they encounter and end up wasting lots of time, or they do the initial implementation and never do any profiling.
Ultimately, if you try to build performance into a design very early, you end up with a big, messy, unmaintainable code base that isn't actually all that fast.
If you build the design cleanly and then optimize the sections that actually need it, you have a more maintainable code base that meets the requirements.
Be the latter.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774118</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774408</id>
	<title>Re:Code in high-level</title>
	<author>Anonymous</author>
	<datestamp>1263482460000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>That's a shame, because Java is actually a great language to learn the principles of assembly. It's very easy to disassemble compiled class files to bytecode, and thus easy to map the stack-based instructions to the Java source.</p></htmltext>
<tokenext>That 's a shame , because Java is actually a great language to learn the principles of assembly .
It 's very easy to disassemble compiled class files to bytecode , and thus easy to map the stack-based instructions to the Java source .</tokentext>
<sentencetext>That's a shame, because Java is actually a great language to learn the principles of assembly.
It's very easy to disassemble compiled class files to bytecode, and thus easy to map the stack-based instructions to the Java source.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30776696</id>
	<title>Re:rule of the code</title>
	<author>wirelessbuzzers</author>
	<datestamp>1263552060000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>One of the courses I took was programming for parallelism. For extra credit, the instructor assigned a 27K x 27K matrix multiply; the person with the best time got a few extra points. A lot of the class worked hard in trying to optimize their code to get better times, I got the best time by playing with the compiler flags.</p></div><p>Really?  Because I had a similar assignment (make Strassen's algorithm as fast as possible, in the 5-10k range) in my algorithms class a while back.  I found that the key to a blazing fast program was careful memory layout: divide the matrix into tiles that fit into L1, transpose the matrix to avoid striding problems.  Vectorizing the inner loops got another large factor.  Compiling with -msse3 -march=native -O3 helped, but the other two were critical and took a fair amount of effort.</p>
	</htmltext>
<tokenext>One of the courses I took was programming for parallelism .
For extra credit , the instructor assigned a 27K x 27K matrix multiply ; the person with the best time got a few extra points .
A lot of the class worked hard in trying to optimize their code to get better times , I got the best time by playing with the compiler flags.Really ?
Because I had a similar assignment ( make Strassen 's algorithm as fast as possible , in the 5-10k range ) in my algorithms class a while back .
I found that the key to a blazing fast program was careful memory layout : divide the matrix into tiles that fit into L1 , transpose the matrix to avoid striding problems .
Vectorizing the inner loops got another large factor .
Compiling with -msse3 -march = native -O3 helped , but the other two were critical and took a fair amount of effort .</tokentext>
<sentencetext>One of the courses I took was programming for parallelism.
For extra credit, the instructor assigned a 27K x 27K matrix multiply; the person with the best time got a few extra points.
A lot of the class worked hard in trying to optimize their code to get better times, I got the best time by playing with the compiler flags.
Really?
Because I had a similar assignment (make Strassen's algorithm as fast as possible, in the 5-10k range) in my algorithms class a while back.
I found that the key to a blazing fast program was careful memory layout: divide the matrix into tiles that fit into L1, transpose the matrix to avoid striding problems.
Vectorizing the inner loops got another large factor.
Compiling with -msse3 -march=native -O3 helped, but the other two were critical and took a fair amount of effort.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774060</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774308</id>
	<title>Re:Code in high-level</title>
	<author>Anonymous</author>
	<datestamp>1263481740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Actually, a for or while construct is trivially easy in asm, almost easier than in c.</p><p>for construct: for(i=amount;i--;)<br>mov ecx,amount<br>loop:<br>...inner loop...<br>dec ecx<br>jnz loop</p><p>while construct: while(amount!=0)<br>loop:<br>...inner loop...<br>cmp amount,0<br>jnz loop</p></htmltext>
<tokenext>Actually , a for or while construct is trivially easy in asm , almost easier than in c.for construct : for ( i = amount ; i-- ; ) mov ecx,amountloop : ...inner loop...dec ecxjnz loopwhile construct : while ( amount ! = 0 ) loop : ...inner loop...cmp amount,0jnz loop</tokentext>
<sentencetext>Actually, a for or while construct is trivially easy in asm, almost easier than in c.
for construct: for(i=amount;i--;) mov ecx,amount loop: ...inner loop... dec ecx jnz loop
while construct: while(amount!=0) loop: ...inner loop... cmp amount,0 jnz loop</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773982</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774784</id>
	<title>Re:Fast forward...</title>
	<author>Anonymous</author>
	<datestamp>1263485460000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>It keeps stalling for me around the 17 minute mark :(</p></htmltext>
<tokenext>It keeps stalling for me around the 17 minute mark : (</tokentext>
<sentencetext>It keeps stalling for me around the 17 minute mark :(</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772618</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30776228</id>
	<title>More of his talks?</title>
	<author>cmason</author>
	<datestamp>1263589140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This talk was great!  But, I'd love to have seen some of the other ones Cliff offered (particularly the GC one).  A quick search of google video/youtube turns up only his lock free <a href="http://video.google.com/videoplay?docid=2139967204534450862&amp;ei=iB9QS9XkDYGwqAP7w8T9CA&amp;q=cliff+click&amp;client=firefox-a#" title="google.com">hash table talk</a> [google.com], which is great, but I've seen it already.</p><p>Anyone have links to more of this guy?</p><p>

-c</p></htmltext>
<tokenext>This talk was great !
But , I 'd love to have seen some of the other ones Cliff offered ( particularly the GC one ) .
A quick search of google video/youtube turns up only his lock free hash table talk [ google.com ] , which is great , but I 've seen it already.Anyone have links to more of this guy ?
-c</tokentext>
<sentencetext>This talk was great!
But, I'd love to have seen some of the other ones Cliff offered (particularly the GC one).
A quick search of google video/youtube turns up only his lock free hash table talk [google.com], which is great, but I've seen it already.
Anyone have links to more of this guy?
-c</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30789170</id>
	<title>Re:It's not just x86</title>
	<author>Anonymous</author>
	<datestamp>1263643020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Goretex A8?</p><p>OMG wearable processors are here!!</p></htmltext>
<tokenext>Goretex A8 ?
OMG wearable processors are here ! !</tokentext>
<sentencetext>Goretex A8?
OMG wearable processors are here!!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773542</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742</id>
	<title>Premature optimization is evil... and stupid</title>
	<author>Anonymous</author>
	<datestamp>1263472260000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>That's the main reason why I want to shoot people who write "clever" code on the first pass. Always make the rough draft of a program clean and readable. If (and only if!) you need to optimize it, use a profiler to see what actually needs work. If you do things like manually unroll loops where the body is only executed 23 times during the program's whole lifetime, or use shift to multiply because you read somewhere that it's fast, then don't be surprised when your coworkers revoke your oxygen bit.</p></htmltext>
<tokenext>That 's the main reason why I want to shoot people who write " clever " code on the first pass .
Always make the rough draft of a program clean and readable .
If ( and only if ! ) you need to optimize it , use a profiler to see what actually needs work .
If you do things like manually unroll loops where the body is only executed 23 times during the program 's whole lifetime , or use shift to multiply because you read somewhere that it 's fast , then do n't be surprised when your coworkers revoke your oxygen bit .</tokentext>
<sentencetext>That's the main reason why I want to shoot people who write "clever" code on the first pass.
Always make the rough draft of a program clean and readable.
If (and only if!) you need to optimize it, use a profiler to see what actually needs work.
If you do things like manually unroll loops where the body is only executed 23 times during the program's whole lifetime, or use shift to multiply because you read somewhere that it's fast, then don't be surprised when your coworkers revoke your oxygen bit.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775570</id>
	<title>...except for the uControllers I use.</title>
	<author>podom</author>
	<datestamp>1263493560000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>I watched about half of his presentation. I was amused because on a lot of the slides he says something like "except on really low end embedded CPUs." I spend a lot of my time programming (frequently in assembly) for these exact very low end CPUs. I haven't had to do much with 8-bit cores, fortunately, but I've been doing a lot of programming on a 16-bit microcontroller lately (EMC eSL).</p><p>I suspect the way I'm programming these chips is a lot like how you would have programmed a desktop CPU in about 1980, except that I get to run all the tools on a computer with a clock speed 100x the chip I'm programming (and at least 1000x the performance). I am constantly amazed by how little we pay for these devices: ~10 Mips, 32k RAM, 128k Program memory, 1MB data memory and they're $1.</p><p>But they do have a 3-stage pipeline, so I guess some of what Dr. Cliff says still applies.</p></htmltext>
<tokenext>I watched about half of his presentation .
I was amused because on a lot of the slides he says something like " except on really low end embedded CPUs . "
I spend a lot of my time programming ( frequently in assembly ) for these exact very low end CPUs .
I have n't had to do much with 8-bit cores , fortunately , but I 've been doing a lot of programming on a 16-bit microcontroller lately ( EMC eSL ) .
I suspect the way I 'm programming these chips is a lot like how you would have programmed a desktop CPU in about 1980 , except that I get to run all the tools on a computer with a clock speed 100x the chip I 'm programming ( and at least 1000x the performance ) .
I am constantly amazed by how little we pay for these devices : ~ 10 Mips , 32k RAM , 128k Program memory , 1MB data memory and they 're $ 1 .
But they do have a 3-stage pipeline , so I guess some of what Dr. Cliff says still applies .</tokentext>
<sentencetext>I watched about half of his presentation.
I was amused because on a lot of the slides he says something like "except on really low end embedded CPUs."
I spend a lot of my time programming (frequently in assembly) for these exact very low end CPUs.
I haven't had to do much with 8-bit cores, fortunately, but I've been doing a lot of programming on a 16-bit microcontroller lately (EMC eSL).
I suspect the way I'm programming these chips is a lot like how you would have programmed a desktop CPU in about 1980, except that I get to run all the tools on a computer with a clock speed 100x the chip I'm programming (and at least 1000x the performance).
I am constantly amazed by how little we pay for these devices: ~10 Mips, 32k RAM, 128k Program memory, 1MB data memory and they're $1.
But they do have a 3-stage pipeline, so I guess some of what Dr. Cliff says still applies.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774698</id>
	<title>Re:Premature optimization is evil... and stupid</title>
	<author>smash</author>
	<datestamp>1263484860000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><blockquote><div><p>by not thinking about performance you can make it expensive or impossible to improve things later without a substantial rewrite.</p></div> </blockquote><p>
"Not thinking about performance" is different from writing in high level first.
</p><p>
Get the algorithm right first, THEN optimise hot spots.
</p><p>
Starting out with ASM makes it a lot more time consuming/difficult to get many different algorithms written, debugged and tested.  The time you spend doing that is time better spent testing/developing a better algorithm.  Only once you get the algorithm correct should you break out the assembler for the hotspots WITHIN that algorithm.
</p><p>
If you're writing such shitty code that it's "impossible to optimize later" then I don't think starting out in ASM will help you.  You'll just have slightly faster shitty code.</p>
	</htmltext>
<tokenext>by not thinking about performance you can make it expensive or impossible to improve things later without a substantial rewrite .
" Not thinking about performance " is different from writing in high level first .
Get the algorithm right first , THEN optimise hot spots .
Starting out with ASM makes it a lot more time consuming/difficult to get many different algorithms written , debugged and tested .
The time you spend doing that is time better spent testing/developing a better algorithm .
Only once you get the algorithm correct should you break out the assembler for the hotspots WITHIN that algorithm .
If you 're writing such shitty code that it 's " impossible to optimize later " then I do n't think starting out in ASM will help you .
You 'll just have slightly faster shitty code .</tokentext>
<sentencetext>by not thinking about performance you can make it expensive or impossible to improve things later without a substantial rewrite.
"Not thinking about performance" is different from writing in high level first.
Get the algorithm right first, THEN optimise hot spots.
Starting out with ASM makes it a lot more time consuming/difficult to get many different algorithms written, debugged and tested.
The time you spend doing that is time better spent testing/developing a better algorithm.
Only once you get the algorithm correct should you break out the assembler for the hotspots WITHIN that algorithm.
If you're writing such shitty code that it's "impossible to optimize later" then I don't think starting out in ASM will help you.
You'll just have slightly faster shitty code.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773896</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772820</id>
	<title>Re:Code in high-level</title>
	<author>Anonymous</author>
	<datestamp>1263472620000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext>Someone has to write those tools.</htmltext>
<tokenext>Someone has to write those tools .</tokentext>
<sentencetext>Someone has to write those tools.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30787296</id>
	<title>Re:...except for the uControllers I use.</title>
	<author>Mr Z</author>
	<datestamp>1263571740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I actually got the impression he was including a lot of the 32-bit microcontrollers out there too.  If you look at a lot of the embedded ARMs that show up in various SoCs, his "low end CPU" comments apply to those, too.  Simple pipeline, in-order execution, etc.</htmltext>
<tokenext>I actually got the impression he was including a lot of the 32-bit microcontrollers out there too .
If you look at a lot of the embedded ARMs that show up in various SoCs , his " low end CPU " comments apply to those , too .
Simple pipeline , in-order execution , etc .</tokentext>
<sentencetext>I actually got the impression he was including a lot of the 32-bit microcontrollers out there too.
If you look at a lot of the embedded ARMs that show up in various SoCs, his "low end CPU" comments apply to those, too.
Simple pipeline, in-order execution, etc.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775570</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773070</id>
	<title>Re:Premature optimization is evil... and stupid</title>
	<author>Anonymous</author>
	<datestamp>1263473760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>Always make the rough draft of a program clean and readable.</i></p><p>Not only that, but if the optimized version is much less readable than the initial version, consider keeping and maintaining *both* versions. You can run tests to compare the output of each version, replace the fast, not-obviously-incorrect version with the slow, obviously-not-incorrect version if you hit a bug and see if it's still there, etc.</p><p>(MS did or does this with Excel; at least until recently, and perhaps still, the recomputation engine for the spreadsheet was hand-tuned assembly. However, for testing and development reasons, they also had a much slower, high-level-language version.)</p></htmltext>
<tokenext>Always make the rough draft of a program clean and readable .
Not only that , but if the optimized version is much less readable than the initial version , consider keeping and maintaining * both * versions .
You can run tests to compare the output of each version , replace the fast , not-obviously-incorrect version with the slow , obviously-not-incorrect version if you hit a bug and see if it 's still there , etc .
( MS did or does this with Excel ; at least until recently , and perhaps still , the recomputation engine for the spreadsheet was hand-tuned assembly .
However , for testing and development reasons , they also had a much slower , high-level-language version .
)</tokentext>
<sentencetext>Always make the rough draft of a program clean and readable.
Not only that, but if the optimized version is much less readable than the initial version, consider keeping and maintaining *both* versions.
You can run tests to compare the output of each version, replace the fast, not-obviously-incorrect version with the slow, obviously-not-incorrect version if you hit a bug and see if it's still there, etc.
(MS did or does this with Excel; at least until recently, and perhaps still, the recomputation engine for the spreadsheet was hand-tuned assembly.
However, for testing and development reasons, they also had a much slower, high-level-language version.
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774746</id>
	<title>Re:Premature optimization is evil... and stupid</title>
	<author>smash</author>
	<datestamp>1263485220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>On the contrary, i'd be more concerned that the medical software is CORRECT.  You can throw more hardware at the problem to make it faster.  You can't throw more hardware at the problem to correct bugs.</htmltext>
<tokenext>On the contrary , i 'd be more concerned that the medical software is CORRECT .
You can throw more hardware at the problem to make it faster .
You ca n't throw more hardware at the problem to correct bugs .</tokentext>
<sentencetext>On the contrary, i'd be more concerned that the medical software is CORRECT.
You can throw more hardware at the problem to make it faster.
You can't throw more hardware at the problem to correct bugs.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774118</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773134</id>
	<title>Re:Fast forward...</title>
	<author>Gazzonyx</author>
	<datestamp>1263474060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You're lucky that you didn't get it to play; mine played to six minutes and then just stopped and won't play or let me skip past that.</htmltext>
<tokenext>You 're lucky that you did n't get it to play ; mine played to six minutes and then just stopped and wo n't play or let me skip past that .</tokentext>
<sentencetext>You're lucky that you didn't get it to play; mine played to six minutes and then just stopped and won't play or let me skip past that.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772944</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773896</id>
	<title>Re:Premature optimization is evil... and stupid</title>
	<author>AuMatar</author>
	<datestamp>1263478440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The opposite problem also exists though-  by not thinking about performance you can make it expensive or impossible to improve things later without a substantial rewrite.  Saying optimize at the end is just as stupid and just as costly.  Learning when to care about what level is part of the art of programming.  (Although on your specific examples I'll agree with you-  especially since I would expect anything but a really old compiler to do mult-&gt;shift conversions for you, so you may as well use the more maintainable and readable multiply.)</p></htmltext>
<tokenext>The opposite problem also exists though- by not thinking about performance you can make it expensive or impossible to improve things later without a substantial rewrite .
Saying optimize at the end is just as stupid and just as costly .
Learning when to care about what level is part of the art of programming .
( Although on your specific examples I 'll agree with you- especially since I would expect anything but a really old compiler to do mult- &gt; shift conversions for you , so you may as well use the more maintainable and readable multiply .
)</tokentext>
<sentencetext>The opposite problem also exists though-  by not thinking about performance you can make it expensive or impossible to improve things later without a substantial rewrite.
Saying optimize at the end is just as stupid and just as costly.
Learning when to care about what level is part of the art of programming.
(Although on your specific examples I'll agree with you-  especially since I would expect anything but a really old compiler to do mult-&gt;shift conversions for you, so you may as well use the more maintainable and readable multiply.
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30777016</id>
	<title>Re:rule of the code</title>
	<author>Anonymous</author>
	<datestamp>1263556440000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>Thanks for rolling out the received wisdom.  Like most received wisdom, it is of course wrong, but applies in enough cases and for enough people to be useful.</p><p>What you must bear in mind though is that something like Amdahl's law applies to hotspot optimization.  Even if you make the hotspot take zero time, the speed of your code is still limited by the performance of the non-hotspots.  This leads to a phenomenon I name "uniformly slow code".</p><p>If your mission is actually to make fast code you need to start from scratch, and create a perfect algorithm, given the hardware constraints.  But this is too hard, and too much work in most cases to be worthwhile, hence the hotspot advice.</p><p>As for "your time is better off writing correct code" ... well, if you can't write correct code, go become an actor or something.  This is a pre-requisite, not a target.</p><p>You have a lot of confidence in the ability of compilers to work magic.  The average compiler is actually pretty bad at all this stuff, and you can forget it on any slightly novel architecture (e.g. PPC vs x86).  Knowing how to do it yourself is still useful - or essential, depending on what you're trying to do.  Every compiler I've used recently has bugs in it regarding floating-point re-ordering anyway (and I mean *every* compiler - gcc 4 is even worse than MSVC at this) so don't lean on this too heavily.  Remember what you said about CORRECT code?  It's a lot easier to tweak a flag than it is to robustly test that the algorithm is still working properly.</p></htmltext>
<tokenext>Thanks for rolling out the received wisdom .
Like most received wisdom , it is of course wrong , but applies in enough cases and for enough people to be useful .
What you must bear in mind though is that something like Amdahl 's law applies to hotspot optimization .
Even if you make the hotspot take zero time , the speed of your code is still limited by the performance of the non-hotspots .
This leads to a phenomenon I name " uniformly slow code " .
If your mission is actually to make fast code you need to start from scratch , and create a perfect algorithm , given the hardware constraints .
But this is too hard , and too much work in most cases to be worthwhile , hence the hotspot advice .
As for " your time is better off writing correct code " ... well , if you ca n't write correct code , go become an actor or something .
This is a pre-requisite , not a target .
You have a lot of confidence in the ability of compilers to work magic .
The average compiler is actually pretty bad at all this stuff , and you can forget it on any slightly novel architecture ( e.g. PPC vs x86 ) .
Knowing how to do it yourself is still useful - or essential , depending on what you 're trying to do .
Every compiler I 've used recently has bugs in it regarding floating-point re-ordering anyway ( and I mean * every * compiler - gcc 4 is even worse than MSVC at this ) so do n't lean on this too heavily .
Remember what you said about CORRECT code ?
It 's a lot easier to tweak a flag than it is to robustly test that the algorithm is still working properly .</tokentext>
<sentencetext>Thanks for rolling out the received wisdom.
Like most received wisdom, it is of course wrong, but applies in enough cases and for enough people to be useful.
What you must bear in mind though is that something like Amdahl's law applies to hotspot optimization.
Even if you make the hotspot take zero time, the speed of your code is still limited by the performance of the non-hotspots.
This leads to a phenomenon I name "uniformly slow code".
If your mission is actually to make fast code you need to start from scratch, and create a perfect algorithm, given the hardware constraints.
But this is too hard, and too much work in most cases to be worthwhile, hence the hotspot advice.
As for "your time is better off writing correct code" ... well, if you can't write correct code, go become an actor or something.
This is a pre-requisite, not a target.
You have a lot of confidence in the ability of compilers to work magic.
The average compiler is actually pretty bad at all this stuff, and you can forget it on any slightly novel architecture (e.g. PPC vs x86).
Knowing how to do it yourself is still useful - or essential, depending on what you're trying to do.
Every compiler I've used recently has bugs in it regarding floating-point re-ordering anyway (and I mean *every* compiler - gcc 4 is even worse than MSVC at this) so don't lean on this too heavily.
Remember what you said about CORRECT code?
It's a lot easier to tweak a flag than it is to robustly test that the algorithm is still working properly.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774060</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774198</id>
	<title>Re:Premature optimization is evil... and stupid</title>
	<author>Anonymous</author>
	<datestamp>1263480900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>A profiler is only one way of determining what's important. Knowledge and experience is another way. If you've worked in a problem domain for a long time, you know the fundamentals of what's fast enough and what's not acceptable. Utilizing a large store of knowledge to "prematurely" optimize is not necessarily a bad thing. If you know for certain that the clearest, easiest-to-understand way of doing something just isn't going to be fast enough, that's perfectly valid. Just balance that against the clarity of the resulting solution.</p><p>These sorts of things generally fall into the realm of algorithmic design, though, not micro-optimizations like substituting shifts for multiplications. The compiler figures all that shit out for you, anyway.</p></htmltext>
<tokenext>A profiler is only one way of determining what 's important .
Knowledge and experience is another way .
If you 've worked in a problem domain for a long time , you know the fundamentals of what 's fast enough and what 's not acceptable .
Utilizing a large store of knowledge to " prematurely " optimize is not necessarily a bad thing .
If you know for certain that the clearest , easiest-to-understand way of doing something just is n't going to be fast enough , that 's perfectly valid .
Just balance that against the clarity of the resulting solution .
These sorts of things generally fall into the realm of algorithmic design , though , not micro-optimizations like substituting shifts for multiplications .
The compiler figures all that shit out for you , anyway .</tokentext>
<sentencetext>A profiler is only one way of determining what's important.
Knowledge and experience is another way.
If you've worked in a problem domain for a long time, you know the fundamentals of what's fast enough and what's not acceptable.
Utilizing a large store of knowledge to "prematurely" optimize is not necessarily a bad thing.
If you know for certain that the clearest, easiest-to-understand way of doing something just isn't going to be fast enough, that's perfectly valid.
Just balance that against the clarity of the resulting solution.
These sorts of things generally fall into the realm of algorithmic design, though, not micro-optimizations like substituting shifts for multiplications.
The compiler figures all that shit out for you, anyway.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774870</id>
	<title>Re:Code in high-level</title>
	<author>KC1P</author>
	<datestamp>1263486180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I use tons of macros in my assembly code but the main reason I do is because present-day assemblers provide a GREAT macro language -- MUCH better than what C has.  So while macros can also be used to package purely rote operations, they're also great for code which contains lots of assembly-time checks and will do the right thing if constants change in the headers, or whatever.</p><p>If you think JMPing is tedious then that just means you're not an assembly programmer, which is fine.  The classic problem is for HLL programmers to think in their native HLL and then try to translate HLL operations into assembly code.  So it makes sense for HLL programmers to chafe at all the JMPs and want to wrap things up to look like REPEAT-UNTIL, or DO, or whatever.  Assembly programmers are used to thinking in tiny steps so it makes perfect sense -- ask a question and then branch based on the answer.  That's how the computer works so that's what I have to tell it.  Since there's no stigma on JMPing (it's the only way to do anything anyway) we embrace it and learn how to do it well (it's not always spaghetti code just because it won't lie flat on the page).</p><p>I definitely agree that having a clear grasp of what you're doing and when beats any amount of low-level tweaking.  But you can have both!  And all those 2% speedups really start to add up if you do them all the time.  But of course the real motivation is the same as for anyone else -- C programmers use C because they like C.  I use assembly because I like assembly.  I have all kinds of justifications (just like a C programmer does) and some of them are right (as are some for C) but mostly I just love it, and almost anything else makes my skin crawl.</p></htmltext>
<tokenext>I use tons of macros in my assembly code but the main reason I do is because present-day assemblers provide a GREAT macro language -- MUCH better than what C has .
So while macros can also be used to package purely rote operations , they 're also great for code which contains lots of assembly-time checks and will do the right thing if constants change in the headers , or whatever .
If you think JMPing is tedious then that just means you 're not an assembly programmer , which is fine .
The classic problem is for HLL programmers to think in their native HLL and then try to translate HLL operations into assembly code .
So it makes sense for HLL programmers to chafe at all the JMPs and want to wrap things up to look like REPEAT-UNTIL , or DO , or whatever .
Assembly programmers are used to thinking in tiny steps so it makes perfect sense -- ask a question and then branch based on the answer .
That 's how the computer works so that 's what I have to tell it .
Since there 's no stigma on JMPing ( it 's the only way to do anything anyway ) we embrace it and learn how to do it well ( it 's not always spaghetti code just because it wo n't lie flat on the page ) .
I definitely agree that having a clear grasp of what you 're doing and when beats any amount of low-level tweaking .
But you can have both !
And all those 2 % speedups really start to add up if you do them all the time .
But of course the real motivation is the same as for anyone else -- C programmers use C because they like C. I use assembly because I like assembly .
I have all kinds of justifications ( just like a C programmer does ) and some of them are right ( as are some for C ) but mostly I just love it , and almost anything else makes my skin crawl .</tokentext>
<sentencetext>I use tons of macros in my assembly code but the main reason I do is because present-day assemblers provide a GREAT macro language -- MUCH better than what C has.
So while macros can also be used to package purely rote operations, they're also great for code which contains lots of assembly-time checks and will do the right thing if constants change in the headers, or whatever.
If you think JMPing is tedious then that just means you're not an assembly programmer, which is fine.
The classic problem is for HLL programmers to think in their native HLL and then try to translate HLL operations into assembly code.
So it makes sense for HLL programmers to chafe at all the JMPs and want to wrap things up to look like REPEAT-UNTIL, or DO, or whatever.
Assembly programmers are used to thinking in tiny steps so it makes perfect sense -- ask a question and then branch based on the answer.
That's how the computer works so that's what I have to tell it.
Since there's no stigma on JMPing (it's the only way to do anything anyway) we embrace it and learn how to do it well (it's not always spaghetti code just because it won't lie flat on the page).
I definitely agree that having a clear grasp of what you're doing and when beats any amount of low-level tweaking.
But you can have both!
And all those 2% speedups really start to add up if you do them all the time.
But of course the real motivation is the same as for anyone else -- C programmers use C because they like C.  I use assembly because I like assembly.
I have all kinds of justifications (just like a C programmer does) and some of them are right (as are some for C) but mostly I just love it, and almost anything else makes my skin crawl.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773982</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775346</id>
	<title>Re:Code in high-level</title>
	<author>Anonymous</author>
	<datestamp>1263490920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Or even better... use a C compiler that supports inline assembly (I'd say most, if not all C compilers).<br>Use and abuse the generated asm of your C compiler (prefer a compiler that allows you to do that with optimization turned on).</p><p>This also allows you to learn how the compiler generates code. If you are fluent with assembly you will be able to spot where the C compiler doesn't optimize as it should (the generated code is not what you expect). In many cases it is due to some language rules where by providing some extra information to the compiler you can get better generated code (restrict et al.). In other cases, rearranging your C code so that the compiler generates what you want. Throw in intrinsics and you can get really close to asm performance without the tedious parts of asm.<br>I personally find it more fun to trick a C compiler into actually generating the asm I want than asm coding. The code is also more portable and easier to understand for outsiders (and to me after a month).</p></htmltext>
<tokenext>Or even better ... use a C compiler that supports inline assembly ( I 'd say most , if not all C compilers ) .
Use and abuse the generated asm of your C compiler ( prefer a compiler that allows you to do that with optimization turned on ) .
This also allows you to learn how the compiler generates code .
If you are fluent with assembly you will be able to spot where the C compiler does n't optimize as it should ( the generated code is not what you expect ) .
In many cases it is due to some language rules where by providing some extra information to the compiler you can get better generated code ( restrict et al. ) .
In other cases , rearranging your C code so that the compiler generates what you want .
Throw in intrinsics and you can get really close to asm performance without the tedious parts of asm .
I personally find it more fun to trick a C compiler into actually generating the asm I want than asm coding .
The code is also more portable and easier to understand for outsiders ( and to me after a month ) .</tokentext>
<sentencetext>Or even better... use a C compiler that supports inline assembly (Id say most, if not all C compilers).Use and abuse the generate asm of your C compiler (prefer a compiler that allows you to that with optimization turned on).This also allows you to learn how the compiler generates code.
If you are fluent with assembly you will be able to spot where the C compiler doesnt optimize as it should (the generated code is not what you expect).
In many cases it is due to some language rules where by providing some extra information to the compiler you can get better generated code (restrict et all).
In other cases rearranging your C code so that the compiler generates what you want.
Throw in intrinsics and you can get really close to asm performance without the tedious parts of asm.I personally find it funnier to trick a C compiler to actually generate the asm I want that asm coding.
The code is also more portable and easier to understand to outsiders (and to me after a month).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773440</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773966</id>
	<title>Another fascinating Click talk</title>
	<author>Anonymous</author>
	<datestamp>1263479160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><a href="http://video.google.com/videoplay?docid=2139967204534450862" title="google.com" rel="nofollow">A Lock-Free Hash Table</a> [google.com]</p></htmltext>
<tokenext>A Lock-Free Hash Table [ google.com ]</tokentext>
<sentencetext>A Lock-Free Hash Table [google.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773092</id>
	<title>Re:Code in high-level</title>
	<author>dave562</author>
	<datestamp>1263473880000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>I think it depends on what kind of code you're trying to write.  If a person desires to write applications then you are right: they might as well write them in a high-level language and let the compiler do the work.  On the other hand, if the person is interested in vulnerability research or security work, then learning ASM might as well be considered a prerequisite.  An understanding of low-level programming and code execution provides a programmer with a solid foundation.  It gives them potential insights into what might be going wrong when their code isn't compiling or executing the way they want it to.  It also gives them the tools to make their code better, as opposed to simply shrugging and saying, "I sure hope they fix this damn compiler..."</p></htmltext>
<tokenext>I think it depends on what kind of code you 're trying to write .
If a person desires to write applications then you are right , they might as well write it in a high level language and let the compiler do the work .
On the other hand if the person is interested in vulnerability research or security work , then learning ASM might as well be considered a requisite .
An understanding of low level programming and code execution provides a programmer with a solid foundation .
It gives the potential insights into what might be going wrong when their code is n't compiling or executing the way they want it to .
It also gives them the tools to make their code better , as opposed to simply shrugging and saying , " I sure hope they fix this damn compiler... "</tokentext>
<sentencetext>I think it depends on what kind of code you're trying to write.
If a person desires to write applications then you are right, they might as well write it in a high level language and let the compiler do the work.
On the other hand if the person is interested in vulnerability research or security work, then learning ASM might as well be considered a requisite.
An understanding of low level programming and code execution provides a programmer with a solid foundation.
It gives the potential insights into what might be going wrong when their code isn't compiling or executing the way they want it to.
It also gives them the tools to make their code better, as opposed to simply shrugging and saying, "I sure hope they fix this damn compiler..."</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774672</id>
	<title>Re:Code in high-level</title>
	<author>ChrisMaple</author>
	<datestamp>1263484560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Not many compilers are aware of the video extensions (SSE, etc.), nor are they able to turn even simple loops into code using those parallel extensions. Speedups of 2X, 3X, or more are possible in certain cases.</htmltext>
<tokenext>Not many compilers are aware of the video extensions ( SSE , etc .
) , nor are they able to turn even simple loops into code using those parallel extensions .
Speedups of 2X , 3X , or more are possible in certain cases .</tokentext>
<sentencetext>Not many compilers are aware of the video extensions (SSE, etc.
), nor are they able to turn even simple loops into code using those parallel extensions.
Speedups of 2X, 3X, or more are possible in certain cases.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773982</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775120</id>
	<title>Re:Code in high-level</title>
	<author>Anonymous</author>
	<datestamp>1263488460000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>that's why i majored in computer engineering. not because i wanted to engineer computers. but because compsci focuses on higher level aspects of software. and now i'm unemployable.</p></htmltext>
<tokenext>that 's why i majored in computer engineering .
not because i wanted to engineer computers .
but because compsci focuses on higher level aspects of software .
and now i 'm unemployable .</tokentext>
<sentencetext>that's why i majored in computer engineering.
not because i wanted to engineer computers.
but because compsci focuses on higher level aspects of software.
and now i'm unemployable.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773502</id>
	<title>Video is a waste of time...</title>
	<author>Anonymous</author>
	<datestamp>1263476160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I can't even watch this.  Anyone got a transcript so that I can skip the video BS and just read it?  I can read a lot faster than he can talk, and I wouldn't have to wait 30 minutes for the video to load (slow connection)<nobr> <wbr></nobr>...</p></htmltext>
<tokenext>I ca n't even watch this .
Anyone got a transcript so that I can skip the video BS and just read it ?
I can read a lot faster than he can talk , and I would n't have to wait 30 minutes for the video to load ( slow connection ) .. .</tokentext>
<sentencetext>I can't even watch this.
Anyone got a transcript so that I can skip the video BS and just read it?
I can read a lot faster than he can talk, and I wouldn't have to wait 30 minutes for the video to load (slow connection) ...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772618</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773894</id>
	<title>Re:Code in high-level</title>
	<author>Anonymous</author>
	<datestamp>1263478440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p></p><div class="quote"><p>I wanted to take ASM in college. I was the only student who showed up for the class and the class was canceled. Since most of the programming classes was Java-centric, no one wanted to get their hands dirty under the hood.</p></div><p>I did an EE and we had to learn ASM for some embedded courses (6811, PIC). Learning it for "larger" processors is certainly possible, but you could always get a hobby kit and learn it. We also had some courses in VHDL to design simple CPUs and VGA emulators (e.g., had to program an FPGA to display certain patterns on a CRT).</p><p>I'm guessing you were a CS, and they didn't really go down into hardware as much as in comp. eng. or EE.</p></htmltext>
<tokenext>I wanted to take ASM in college .
I was the only student who showed up for the class and the class was canceled .
Since most of the programming classes was Java-centric , no one wanted to get their hands dirty under the hood.I did an EE and we had to learn ASM for some embedded courses ( 6811 , PIC ) .
Learning it for " larger " processors is certainly possible , but you could always get a hobby kit and learn it .
We also had some courses in VHDL to design simple CPUs and VGA emulators ( e.g. , had to program an FPGA to display certain patterns on a CRT ) .I 'm guessing you were a CS , and they did n't really go down into hardware was much as in comp .
eng. or EE .</tokentext>
<sentencetext>I wanted to take ASM in college.
I was the only student who showed up for the class and the class was canceled.
Since most of the programming classes was Java-centric, no one wanted to get their hands dirty under the hood.I did an EE and we had to learn ASM for some embedded courses (6811, PIC).
Learning it for "larger" processors is certainly possible, but you could always get a hobby kit and learn it.
We also had some courses in VHDL to design simple CPUs and VGA emulators (e.g., had to program an FPGA to display certain patterns on a CRT).I'm guessing you were a CS, and they didn't really go down into hardware was much as in comp.
eng. or EE.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775700</id>
	<title>Re:Code in high-level</title>
	<author>Anonymous</author>
	<datestamp>1263495300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Don't worry, help is always around the corner.</p><p>Google "Flat Assembly" to get started.</p></htmltext>
<tokenext>Do n't worry , help is always around the corner.Google " Flat Assembly " to get started .</tokentext>
<sentencetext>Don't worry, help is always around the corner.Google "Flat Assembly" to get started.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30778684</id>
	<title>C++ in Quake 3?</title>
	<author>Anonymous</author>
	<datestamp>1263569700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I'm sorry, where exactly did you find 'C++ methods' in Quake 3 code?</p></htmltext>
<tokenext>I 'm sorry , where exactly did you find 'C + + methods ' in Quake 3 code ?</tokentext>
<sentencetext>I'm sorry, where exactly did you find 'C++ methods' in Quake 3 code?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30776028</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773884</id>
	<title>Re:Premature optimization is evil... and stupid</title>
	<author>Just Some Guy</author>
	<datestamp>1263478380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p></p><div class="quote"><p>so there's no reason to explicitly use shift in code (unless you're doing bit manipulation)</p></div><p>Well, right. The general advice is to always write what you actually want the compiler to do and not how to do it, unless you have specific proof that the compiler's not optimizing it well.</p></htmltext>
<tokenext>so there 's no reason to explicitly use shift in code ( unless you 're doing bit manipulationWell , right .
The general advice is to always write what you actually want the compiler to do and not how to do it , unless you have specific proof that the compiler 's not optimizing it well .</tokentext>
<sentencetext>so there's no reason to explicitly use shift in code (unless you're doing bit manipulationWell, right.
The general advice is to always write what you actually want the compiler to do and not how to do it, unless you have specific proof that the compiler's not optimizing it well.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773114</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784</id>
	<title>Re:Code in high-level</title>
	<author>Anonymous</author>
	<datestamp>1263472440000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext>I wanted to take ASM in college. I was the only student who showed up for the class and the class was canceled. Since most of the programming classes were Java-centric, no one wanted to get their hands dirty under the hood.</htmltext>
<tokenext>I wanted to take ASM in college .
I was the only student who showed up for the class and the class was canceled .
Since most of the programming classes was Java-centric , no one wanted to get their hands dirty under the hood .</tokentext>
<sentencetext>I wanted to take ASM in college.
I was the only student who showed up for the class and the class was canceled.
Since most of the programming classes was Java-centric, no one wanted to get their hands dirty under the hood.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30777868</id>
	<title>Re:I hate flash video</title>
	<author>Quantumstate</author>
	<datestamp>1263565080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I use it and I find that it is buggy on my machine.  I don't know about other people, but mine randomly stops working from time to time and needs a browser restart.</p></htmltext>
<tokenext>I use it and I find that it is buggy on my machine .
I do n't know about other people but mine random stops working from time to time and needs a browser restart .</tokentext>
<sentencetext>I use it and I find that it is buggy on my machine.
I don't know about other people but mine random stops working from time to time and needs a browser restart.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774820</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774564
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_49</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30777868
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774820
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773556
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774630
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773114
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774408
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30776222
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773440
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772946
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773894
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_48</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30777016
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774060
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774632
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774118
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773470
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775346
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773440
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774698
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773896
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774950
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774808
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30789170
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773542
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30777336
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772944
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772618
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775414
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30776696
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774060
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30777476
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773134
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772944
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772618
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773070
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775056
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774808
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773974
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773884
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773114
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775740
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775120
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772964
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772780
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773342
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_47</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773208
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772780
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772820
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774308
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773982
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774296
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775772
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773272
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30776686
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774808
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773502
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772618
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774198
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772736
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30827362
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30777024
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30776028
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774118
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_46</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774746
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774118
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774672
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773982
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774784
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772618
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30779476
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773114
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30792136
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774060
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30783150
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772618
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30787296
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775570
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30778684
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30776028
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774118
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30780062
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775570
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30776624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774060
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775700
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774870
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773982
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_239229_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773092
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_239229.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774060
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30792136
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30777016
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30776696
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30776624
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_239229.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772640
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772784
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775120
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773272
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775772
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775700
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773440
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775346
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30776222
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773894
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773982
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774870
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774672
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774308
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774408
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774564
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775740
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773342
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772820
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773092
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775414
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774296
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773470
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772736
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_239229.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772638
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_239229.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773556
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774820
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30777868
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_239229.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775570
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30787296
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30780062
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_239229.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773542
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30789170
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_239229.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772742
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773114
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774630
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773884
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30779476
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774118
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774632
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774746
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30776028
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30778684
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30777024
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30827362
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30777476
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772946
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774198
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773974
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773070
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773896
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774698
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_239229.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774808
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30776686
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775056
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774950
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_239229.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30775344
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_239229.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772780
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773208
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772964
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_239229.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772618
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773502
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30783150
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30774784
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30772944
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30777336
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_239229.30773134
</commentlist>
</conversation>
