<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_02_10_1432247</id>
	<title>The Art of Unit Testing</title>
	<author>samzenpus</author>
	<datestamp>1265829900000</datestamp>
	<htmltext>FrazzledDad writes <i>"'We let the tests we wrote do more harm than good.' That snippet from the preface of Roy Osherove's <em>The Art of Unit Testing with Examples in .NET</em> (AOUT hereafter) is the wrap up of a frank description of a failed project Osherove was part of. The goal of AOUT is teaching you great approaches to unit testing so you won't run into similar failures on your own projects."</i> Keep reading for the rest of FrazzledDad's review.</htmltext>
<tokentext>FrazzledDad writes " 'We let the tests we wrote do more harm than good .
' That snippet from the preface of Roy Osherove 's The Art of Unit Testing with Examples in .NET ( AOUT hereafter ) is the wrap up of a frank description of a failed project Osherove was part of .
The goal of AOUT is teaching you great approaches to unit testing so you wo n't run into similar failures on your own projects .
" Keep reading for the rest of FrazzledDad 's review .</tokentext>
<sentencetext>FrazzledDad writes "'We let the tests we wrote do more harm than good.
' That snippet from the preface of Roy Osherove's The Art of Unit Testing with Examples in .NET (AOUT hereafter) is the wrap up of a frank description of a failed project Osherove was part of.
The goal of AOUT is teaching you great approaches to unit testing so you won't run into similar failures on your own projects.
" Keep reading for the rest of FrazzledDad's review.</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31097072</id>
	<title>Re:Unit testing is not a silver bullet</title>
	<author>iangoldby</author>
	<datestamp>1265879940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>simplistic</p></div></blockquote><p> I think you mean <em>simple</em>, or perhaps <em>very simple</em>. <em>Simplistic</em> means <em> <strong>too</strong> simple</em> or <em> <strong>over</strong>-simplified</em>. If your unit tests are simplistic then they are not adequate for the job.</p>
	</htmltext>
<tokentext>simplistic I think you mean simple , or perhaps very simple .
Simplistic means too simple or over-simplified .
If your unit tests are simplistic then they are not adequate for the job .</tokentext>
<sentencetext>simplistic I think you mean simple, or perhaps very simple.
Simplistic means  too simple or  over-simplified.
If your unit tests are simplistic then they are not adequate for the job.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090678</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31097736</id>
	<title>Re:Unit testing is not a silver bullet</title>
	<author>Rathkan</author>
	<datestamp>1265888640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I don't think you understand the problem the parent described. Unit tests can't help you to diagnose multi-threaded and time-related issues. When you have a bug which only reproduces in the wild once every 3 months, just saying "unit tests" won't allow you to reproduce the bug, fix it and add the test to reproduce this bug to regression. At best you need to create the tools to reproduce the bug yourself, and with certain systems and certain bugs, this can be far from trivial to develop.

Multi-threaded and real-time systems are a whole different kettle of fish than basic class design and testing.</htmltext>
<tokentext>I do n't think you understand the problem the parent described .
Unit tests ca n't help you to diagnose multi-threaded and time-related issues .
When you have a bug which only reproduces in the wild once every 3 months , just saying " unit tests " wo n't allow you to reproduce the bug , fix it and add the test to reproduce this bug to regression .
At best you need to create the tools to reproduce the bug yourself , and with certain systems and certain bugs , this can be far from trivial to develop .
Multi-threaded and real-time systems are a whole different kettle of fish than basic class design and testing .</tokentext>
<sentencetext>I don't think you understand the problem the parent described.
Unit tests can't help you to diagnose multi-threaded and time-related issues.
When you have a bug which only reproduces in the wild once every 3 months, just saying "unit tests" won't allow you to reproduce the bug, fix it and add the test to reproduce this bug to regression.
At best you need to create the tools to reproduce the bug yourself, and with certain systems and certain bugs, this can be far from trivial to develop.
Multi-threaded and real-time systems are a whole different kettle of fish than basic class design and testing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090678</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31103380</id>
	<title>Don't lose sight of the goal</title>
	<author>CyberLife</author>
	<datestamp>1265920140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I think many are getting caught up in terminology and forgetting (or perhaps they never knew) that the overall general purpose of any testing is to eliminate assumptions. Do the requirements really reflect what the customer wants? Does the system really meet the specs? Does component X really perform its job? Or are these things just assumed to be true? Testing gives one the ability to find out.</p><p>Now the decision of what to test and what to ignore is an important one, and ultimately it comes down to recognizing one's assumptions. What is one willing to assume, and what must they really know for certain?</p></htmltext>
<tokentext>I think many are getting caught up in terminology and forgetting ( or perhaps they never knew ) that the overall general purpose of any testing is to eliminate assumptions .
Do the requirements really reflect what the customer wants ?
Does the system really meet the specs ?
Does component X really perform its job ?
Or are these things just assumed to be true ?
Testing gives one the ability to find out.Now the decision of what to test and what to ignore is an important one , and ultimately it comes down to recognizing one 's assumptions .
What is one willing to assume , and what must they really know for certain ?</tokentext>
<sentencetext>I think many are getting caught up in terminology and forgetting (or perhaps they never knew) that the overall general purpose of any testing is to eliminate assumptions.
Do the requirements really reflect what the customer wants?
Does the system really meet the specs?
Does component X really perform its job?
Or are these things just assumed to be true?
Testing gives one the ability to find out.Now the decision of what to test and what to ignore is an important one, and ultimately it comes down to recognizing one's assumptions.
What is one willing to assume, and what must they really know for certain?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089716</id>
	<title>cOM3E ON poeple</title>
	<author>For a Free Internet</author>
	<datestamp>1265056680000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext><p>HAT is everybodu  talking about I am totally like totally! And the unite of thiswas with the gonhoreea in monkey shit over cages with tiw thwi really its apples na doranges what ou say ijn Biurma 9u5oijrk  insightful rhuekj  penois</p></htmltext>
<tokentext>HAT is everybodu talking about I am totally like totally !
And the unite of thiswas with the gonhoreea in monkey shit over cages with tiw thwi really its apples na doranges what ou say ijn Biurma 9u5oijrk insightful rhuekj penois</tokentext>
<sentencetext>HAT is everybodu  talking about I am totally like totally!
And the unite of thiswas with the gonhoreea in monkey shit over cages with tiw thwi really its apples na doranges what ou say ijn Biurma 9u5oijrk  insightful rhuekj  penois</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090370</id>
	<title>Re:Error coding...</title>
	<author>Anonymous</author>
	<datestamp>1265017260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Fortunately, modern concepts like exceptions have eliminated the need for steps 2 through 5.  It is very annoying to go back to old code and see:<br><tt><br>return_value = DoStep1();<br>if (return_value != success) then handle_error(return_Value);<br>return_value = DoStep2();<br>if (return_value != success) then handle_error(return_Value);<br>return_value = DoStep3();<br>if (return_value != success) then handle_error(return_Value);<br>.<br>.<br>.<br></tt><br>or worse:<br><tt><br>return_value = DoStep1();<br>if (return_value == success)<br>{<br>
&nbsp; &nbsp; &nbsp; return_value = DoStep2()<br>
&nbsp; &nbsp; &nbsp; if (return_Value == success)<nobr> <wbr></nobr>... And so on, indented to the 500th column...<br>}</tt></p><p>instead of:</p><p><tt>DoStep1();<br>DoStep2();<br>DoStep3();<br>upon failure, handle_error.</tt></p></htmltext>
<tokentext>Fortunately , modern concepts like exceptions have eliminated the need for steps 2 through 5 .
It is very annoying to go back to old code and see : return_value = DoStep1 ( ) ; if ( return_value != success ) then handle_error ( return_Value ) ; return_value = DoStep2 ( ) ; if ( return_value != success ) then handle_error ( return_Value ) ; return_value = DoStep3 ( ) ; if ( return_value != success ) then handle_error ( return_Value ) ; ...or worse : return_value = DoStep1 ( ) ; if ( return_value == success ) { return_value = DoStep2 ( ) if ( return_Value == success ) ... And so on , indented to the 500th column... } instead of : DoStep1 ( ) ; DoStep2 ( ) ; DoStep3 ( ) ; upon failure , handle_error .</tokentext>
<sentencetext>Fortunately, modern concepts like exceptions have eliminated the need for steps 2 through 5.
It is very annoying to go back to old code and see:return_value = DoStep1();if (return_value != success) then handle_error(return_Value);return_value = DoStep2();if (return_value != success) then handle_error(return_Value);return_value = DoStep3();if (return_value != success) then handle_error(return_Value);...or worse:return_value = DoStep1();if (return_value == success){
      return_value = DoStep2()
      if (return_Value == success) ... And so on, indented to the 500th column...}instead of:DoStep1();DoStep2();DoStep3();upon failure, handle_error.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089958</parent>
</comment>
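The contrast this comment draws can be sketched in Python, where a single exception handler replaces the repeated per-step return-code checks. The `do_step*` functions and `handle_error` are hypothetical stand-ins, not code from any real library:

```python
errors = []

def handle_error(err):
    # Hypothetical error handler: just records the failure message.
    errors.append(str(err))

def do_step1():
    pass  # succeeds

def do_step2():
    # Hypothetical failing step: raises an exception instead of
    # returning an error code for the caller to check.
    raise RuntimeError("step2 failed")

def do_step3():
    pass  # never reached, because do_step2 raises first

def run_steps():
    # One try/except replaces every `if (return_value != success)` check;
    # the failure propagates out of whichever step raised it.
    try:
        do_step1()
        do_step2()
        do_step3()
    except RuntimeError as err:
        handle_error(err)

run_steps()
```

The control flow stays flat regardless of how many steps are added, which is exactly the "indented to the 500th column" problem the comment mocks.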
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090522</id>
	<title>Re:It's not art, it's basic engineering</title>
	<author>msclrhd</author>
	<datestamp>1265018100000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>When testing a system, if you cannot put a given component under test (or do so by "faking" its dependants -- e.g. by the things that talk to the database) then the architecture is wrong.</p><p>I strive never to have any "fake" parts of the system in a test. It makes it harder to maintain (e.g. changing some of the real components will break the tests). You cannot easily change the data you are testing with, or having a method generate an error for a specific test. You are also not really testing the proper code; not all of it, at any rate.</p><p>You should implement interfaces at the interface boundaries, and have it so that the code under test can be given different implementations of that interface. This means that you don't need to fake any part of your codebase -- you are testing it with different data and/or interface behaviours (e.g. exceptions) that are designed to exercise the code under test. The code under test should not need modification in order to run (aside from re-architecturing the system to make it testable).</p><p>The main goal of testing is to have the maximum coverage of the code possible to ensure that any changes to the code don't change expected behaviour or cause bugs. Ideally, when a bug is found in manual testing, it should be possible to add a test case for that bug so that it can be verified and so that future work will not re-introduce that bug.</p><p>Start where you can. If you have a large project, put the code that you are working on under test first to verify the existing behaviour. This also works as an exploratory phase for code that you don't fully understand.</p><p>Also remember that tests should form part of the documentation. They are useful for verifying an interface contract (does a method accept a null string when the contract says it does? does the foo object always exist like the document says it does?)</p></htmltext>
<tokentext>When testing a system , if you can not put a given component under test ( or do so by " faking " its dependants -- e.g .
by the things that talk to the database ) then the architecture is wrong.I strive never to have any " fake " parts of the system in a test .
It makes it harder to maintain ( e.g .
changing some of the real components will break the tests ) .
You can not easily change the data you are testing with , or having a method generate an error for a specific test .
You are also not really testing the proper code ; not all of it , at any rate.You should implement interfaces at the interface boundaries , and have it so that the code under test can be given different implementations of that interface .
This means that you do n't need to fake any part of your codebase -- you are testing it with different data and/or interface behaviours ( e.g .
exceptions ) that are designed to exercise the code under test .
The code under test should not need modification in order to run ( aside from re-architecturing the system to make it testable ) .The main goal of testing is to have the maximum coverage of the code possible to ensure that any changes to the code do n't change expected behaviour or cause bugs .
Ideally , when a bug is found in manual testing , it should be possible to add a test case for that bug so that it can be verified and so that future work will not re-introduce that bug.Start where you can .
If you have a large project , put the code that you are working on under test first to verify the existing behaviour .
This also works as an exploratory phase for code that you do n't fully understand.Also remember that tests should form part of the documentation .
They are useful for verifying an interface contract ( does a method accept a null string when the contract says it does ?
does the foo object always exist like the document says it does ?
)</tokentext>
<sentencetext>When testing a system, if you cannot put a given component under test (or do so by "faking" its dependants -- e.g.
by the things that talk to the database) then the architecture is wrong.I strive never to have any "fake" parts of the system in a test.
It makes it harder to maintain (e.g.
changing some of the real components will break the tests).
You cannot easily change the data you are testing with, or having a method generate an error for a specific test.
You are also not really testing the proper code; not all of it, at any rate.You should implement interfaces at the interface boundaries, and have it so that the code under test can be given different implementations of that interface.
This means that you don't need to fake any part of your codebase -- you are testing it with different data and/or interface behaviours (e.g.
exceptions) that are designed to exercise the code under test.
The code under test should not need modification in order to run (aside from re-architecturing the system to make it testable).The main goal of testing is to have the maximum coverage of the code possible to ensure that any changes to the code don't change expected behaviour or cause bugs.
Ideally, when a bug is found in manual testing, it should be possible to add a test case for that bug so that it can be verified and so that future work will not re-introduce that bug.Start where you can.
If you have a large project, put the code that you are working on under test first to verify the existing behaviour.
This also works as an exploratory phase for code that you don't fully understand.Also remember that tests should form part of the documentation.
They are useful for verifying an interface contract (does a method accept a null string when the contract says it does?
does the foo object always exist like the document says it does?
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089888</parent>
</comment>
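msclrhd's point about implementing interfaces at the boundaries can be illustrated with a minimal Python sketch. The `Clock` and `Greeter` names are invented for the example: the code under test receives an implementation of the interface, so a test substitutes a controlled one instead of faking internal parts of the codebase:

```python
class Clock:
    """Interface boundary: anything with a now() returning the hour."""
    def now(self):
        raise NotImplementedError

class FixedClock(Clock):
    """Test-side implementation: returns a fixed, controllable hour."""
    def __init__(self, hour):
        self.hour = hour
    def now(self):
        return self.hour

class Greeter:
    """Code under test: depends only on the Clock interface, so it
    runs unmodified with either a real clock or a test clock."""
    def __init__(self, clock):
        self.clock = clock
    def greet(self):
        return "Good morning" if self.clock.now() < 12 else "Good afternoon"

# The test injects the controlled implementation -- no part of
# Greeter itself is faked or modified.
assert Greeter(FixedClock(9)).greet() == "Good morning"
assert Greeter(FixedClock(15)).greet() == "Good afternoon"
```

A production `Clock` backed by the system time would be passed in the same way, which is the re-architecting for testability the comment describes.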
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31098542</id>
	<title>Re:Wrong Unit</title>
	<author>Civil_Disobedient</author>
	<datestamp>1265898060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That's funny, when I saw the book cover thumbnail, I thought it was a picture of a <a href="http://weirdscifi.ratiosemper.com/drwho/timelords.html" title="ratiosemper.com">timelord</a> [ratiosemper.com], then realized there was no ceremonial headpiece, and thought it must be a Gallifrey Citadel Guard.</p><p>Apparently I am a giant hulking geek.</p></htmltext>
<tokentext>That 's funny , when I saw the book cover thumbnail , I thought it was a picture of a timelord [ ratiosemper.com ] , then realized there was no ceremonial headpiece , and thought it must be a Gallifrey Citadel Guard.Apparently I am a giant hulking geek .</tokentext>
<sentencetext>That's funny, when I saw the book cover thumbnail, I thought it was a picture of a timelord [ratiosemper.com], then realized there was no ceremonial headpiece, and thought it must be a Gallifrey Citadel Guard.Apparently I am a giant hulking geek.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089674</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091690</id>
	<title>shi7!</title>
	<author>Anonymous</author>
	<datestamp>1265022660000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext>thing for The BSD has always</htmltext>
<tokenext>thing for The BSD has always</tokentext>
<sentencetext>thing for The BSD has always</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093892</id>
	<title>Re:Does he back up anything he says</title>
	<author>wrook</author>
	<datestamp>1265033880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The problem with measuring the effectiveness of programming techniques is that it is very difficult.  It is quite valid to say that there are few studies to back up the effectiveness of various "agile" techniques.  But I will point out that this is true of every programming technique.</p><p>The problem with measuring this is that it is impossible to get a baseline.  There is a huge difference in productivity based simply on individual talent.  This has been shown.  So you will need thousands of programmers to test any theory.  Problems are also extremely variable, so it is difficult to measure productivity across different problems.  You would need to solve hundreds of non-trivial problems to test your theories.  Finally, objective code quality is an unknown.  Existing metrics are well known to be bad at estimating real quality.  Solving any one of these measurement problems would be enough to get you a PhD.</p><p>If someone could find a good way to test different techniques and provide statistically significant results, they would be rich beyond the dreams of avarice.  You use a variety of different techniques, which I assume you feel are more effective (or at least as effective) as others.  Check the literature.  Do you have any proof other than your own (or other's) anecdotal experience to back up your opinions?</p><p>Unfortunately, with the current state of affairs, we are very vulnerable to methodology snake oil salesmen.  Everybody wants the cheap cure-all.  Every popular methodology has more than its fair share of such leeches.  As soon as it becomes a buzz, somebody wants to make a buck off it.  The truth is that there are a lot of individual techniques that are effective, but you are going to have to put effort in to evaluate them yourself.  Try to keep an open mind and keep several hours a week available for training and exploring these possibilities.  You won't be sorry.</p></htmltext>
<tokentext>The problem with measuring the effectiveness of programming techniques is that it is very difficult .
It is quite valid to say that there are few studies to back up the effectiveness of various " agile " techniques .
But I will point out that this is true of every programming technique.The problem with measuring this is that it is impossible to get a baseline .
There is a huge difference in productivity based simply on individual talent .
This has been shown .
So you will need thousands of programmers to test any theory .
Problems are also extremely variable , so it is difficult to measure productivity across different problems .
You would need to solve hundreds of non-trivial problems to test your theories .
Finally , objective code quality is an unknown .
Existing metrics are well known to be bad at estimating real quality .
Solving any one of these measurement problems would be enough to get you a PhD.If someone could find a good way to test different techniques and provide statistically significant results , they would be rich beyond the dreams of avarice .
You use a variety of different techniques , which I assume you feel are more effective ( or at least as effective ) as others .
Check the literature .
Do you have any proof other than your own ( or other 's ) anecdotal experience to back up your opinions ? Unfortunately , with the current state of affairs , we are very vulnerable to methodology snake oil salesmen .
Everybody wants the cheap cure-all .
Every popular methodology has more than its fair share of such leeches .
As soon as it becomes a buzz , somebody wants to make a buck off it .
The truth is that there are a lot of individual techniques that are effective , but you are going to have to put effort in to evaluate them yourself .
Try to keep an open mind and keep several hours a week available for training and exploring these possibilities .
You wo n't be sorry .</tokentext>
<sentencetext>The problem with measuring the effectiveness of programming techniques is that it is very difficult.
It is quite valid to say that there are few studies to back up the effectiveness of various "agile" techniques.
But I will point out that this is true of every programming technique.The problem with measuring this is that it is impossible to get a baseline.
There is a huge difference in productivity based simply on individual talent.
This has been shown.
So you will need thousands of programmers to test any theory.
Problems are also extremely variable, so it is difficult to measure productivity across different problems.
You would need to solve hundreds of non-trivial problems to test your theories.
Finally, objective code quality is an unknown.
Existing metrics are well known to be bad at estimating real quality.
Solving any one of these measurement problems would be enough to get you a PhD.If someone could find a good way to test different techniques and provide statistically significant results, they would be rich beyond the dreams of avarice.
You use a variety of different techniques, which I assume you feel are more effective (or at least as effective) as others.
Check the literature.
Do you have any proof other than your own (or other's) anecdotal experience to back up your opinions?Unfortunately, with the current state of affairs, we are very vulnerable to methodology snake oil salesmen.
Everybody wants the cheap cure-all.
Every popular methodology has more than its fair share of such leeches.
As soon as it becomes a buzz, somebody wants to make a buck off it.
The truth is that there are a lot of individual techniques that are effective, but you are going to have to put effort in to evaluate them yourself.
Try to keep an open mind and keep several hours a week available for training and exploring these possibilities.
You won't be sorry.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091002</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31095046</id>
	<title>Re:xUnit Test Patterns</title>
	<author>ocularDeathRay</author>
	<datestamp>1265040420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>For anyone familiar with the basics of unit testing but struggling to implement it in real world scenarios,</p> </div><p>

I think that all slashdot readers fall into this category....</p></p>
	</htmltext>
<tokentext>For anyone familiar with the basics of unit testing but struggling to implement it in real world scenarios , I think that all slashdot readers fall into this category... .</tokentext>
<sentencetext>For anyone familiar with the basics of unit testing but struggling to implement it in real world scenarios, 

I think that all slashdot readers fall into this category....
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089842</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093416</id>
	<title>Re:It's not art, it's basic engineering</title>
	<author>wrook</author>
	<datestamp>1265031780000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>This is a really good post.  I wish I could moderate you up.  Like some people, I've become less enamoured with the word "test" for unit tests.  It implies that I am trying to find out if the functionality works.  This is obviously part of my effort, but actually it has become less so for me over time.  For me, unit tests are used for telling me when something has changed in the system that needs my attention.  I liken it to a spider's web.  I'm not trying to find all the corner cases or prove that it works in every case.  I want bugs to have a high probability of hitting my web and informing me.  When writing new code I also want to be informed when I make an assumption about existing code that is different from the original author.  I think about my assumptions and try to write unit tests that verify the assumptions.  This often fills out most of my requirements for a "spider's web" since when people start messing with code and break my assumptions, my tests will also break.</p><p>Finally, your point about documentation is extremely good.  A large number of people, even if they are used to writing unit tests, don't understand unit testing as documentation.  I've gone to the extreme of thinking about my tests as being literate programming written in the programming language rather than English.  To this extent, I've embraced BDD and write stories with tests.  For each story that I'm developing, I'll create unit tests that explain how each part of the interface is used.  I then refactor my stories mercilessly over time to maintain a consistent narrative.  However, I often feel like I want a "web" (as in TeX's literate programming tool) tool that will generate my narrative, but will still allow me to view the code as units (which is useful for debugging).</p></htmltext>
<tokentext>This is a really good post .
I wish I could moderate you up .
Like some people , I 've become less enamoured with the word " test " for unit tests .
It implies that I am trying to find out if the functionality works .
This is obviously part of my effort , but actually it has become less so for me over time .
For me , unit tests are used for telling me when something has changed in the system that needs my attention .
I liken it to a spider 's web .
I 'm not trying to find all the corner cases or prove that it works in every case .
I want bugs to have a high probability of hitting my web and informing me .
When writing new code I also want to be informed when I make an assumption about existing code that is different from the original author .
I think about my assumptions and try to write unit tests that verify the assumptions .
This often fills out most of my requirements for a " spider 's web " since when people start messing with code and break my assumptions , my tests will also break.Finally , your point about documentation is extremely good .
A large number of people , even if they are used to writing unit tests , do n't understand unit testing as documentation .
I 've gone to the extreme of thinking about my tests as being literate programming written in the programming language rather than English .
To this extent , I 've embraced BDD and write stories with tests .
For each story that I 'm developing , I 'll create unit tests that explain how each part of the interface is used .
I then refactor my stories mercilessly over time to maintain a consistent narrative .
However , I often feel like I want a " web " ( as in TeX 's literate programming tool ) tool that will generate my narrative , but will still allow me to view the code as units ( which is useful for debugging ) .</tokentext>
<sentencetext>This is a really good post.
I wish I could moderate you up.
Like some people, I've become less enamoured with the word "test" for unit tests.
It implies that I am trying to find out if the functionality works.
This is obviously part of my effort, but actually it has become less so for me over time.
For me, unit tests are used for telling me when something has changed in the system that needs my attention.
I liken it to a spider's web.
I'm not trying to find all the corner cases or prove that it works in every case.
I want bugs to have a high probability of hitting my web and informing me.
When writing new code I also want to be informed when I make an assumption about existing code that is different from the original author.
I think about my assumptions and try to write unit tests that verify the assumptions.
This often fills out most of my requirements for a "spider's web" since when people start messing with code and break my assumptions, my tests will also break.Finally, your point about documentation is extremely good.
A large number of people, even if they are used to writing unit tests, don't understand unit testing as documentation.
I've gone to the extreme of thinking about my tests as being literate programming written in the programming language rather than English.
To this extent, I've embraced BDD and write stories with tests.
For each story that I'm developing, I'll create unit tests that explain how each part of the interface is used.
I then refactor my stories mercilessly over time to maintain a consistent narrative.
However, I often feel like I want a "web" (as in TeX's literate programming tool) tool that will generate my narrative, but will still allow me to view the code as units (which is useful for debugging).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090522</parent>
</comment>
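The "assumption test" idea above can be sketched in a few lines of Python's unittest (purely illustrative; `parse_id` is an invented stand-in for someone else's existing code):

```python
import unittest

# Hypothetical function standing in for "existing code written by someone else";
# the tests below pin down my assumptions about it, not its full specification.
def parse_id(raw):
    return int(raw.strip())

class ParseIdAssumptions(unittest.TestCase):
    """One test per assumption: the 'spider's web' that catches breaking changes."""

    def test_assumes_surrounding_whitespace_is_ignored(self):
        self.assertEqual(parse_id("  42\n"), 42)

    def test_assumes_non_numeric_input_raises(self):
        with self.assertRaises(ValueError):
            parse_id("forty-two")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParseIdAssumptions)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

If a later author changes `parse_id` to stop stripping whitespace, the first test breaks and flags the changed assumption, which is exactly the web's job.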
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090642</id>
	<title>Re:Unit testing is not a silver bullet</title>
	<author>ClosedSource</author>
	<datestamp>1265018520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>A lot of the methodologies were designed by people who have experience only in writing MOR (Middle Of the Road) code and in many cases haven't written any production code in years.</p><p>So it's not surprising that it's a bad fit for most specialty projects.</p></htmltext>
<tokenext>A lot of the methodologies were designed by people who have experience only in writing MOR ( Middle Of the Road ) code and in many cases have n't written any production code in years.So it 's not surprising that it 's a bad fit for most specialty projects .</tokentext>
<sentencetext>A lot of the methodologies were designed by people who have experience only in writing MOR (Middle Of the Road) code and in many cases haven't written any production code in years.
So it's not surprising that it's a bad fit for most specialty projects.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090310</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090310</id>
	<title>Unit testing is not a silver bullet</title>
	<author>CxDoo</author>
	<datestamp>1265016900000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>I work on distributed real-time software (financial industry) and can tell you that unit tests for components I write are either</p><p>1. trivial to write, therefore useless<br>2. impossible to write, therefore useless</p><p>I find full logging and reliable time synchronization both easier to implement and more useful in tracking bugs and/or design errors in the environment I deal with than unit testing.</p><p>tl;dr - check return values, catch exceptions and dump them in your logs (and use state machines so you know where exactly you were, and so on...)</p></htmltext>
<tokenext>I work on distributed real-time software ( financial industry ) and can tell you that unit tests for components I write are either1 .
trivial to write , therefore useless2 .
impossible to write , therefore uselessI find full logging and reliable time synchronization both easier to implement and more useful in tracking bugs and / or design errors in environment I deal with than unit testing.tl ; dr - check return values , catch exceptions and dump them in your logs ( and use state machines so you know where exactly you were , and so on... )</tokentext>
<sentencetext>I work on distributed real-time software (financial industry) and can tell you that unit tests for components I write are either
1. trivial to write, therefore useless
2. impossible to write, therefore useless
I find full logging and reliable time synchronization both easier to implement and more useful in tracking bugs and/or design errors in the environment I deal with than unit testing.
tl;dr - check return values, catch exceptions and dump them in your logs (and use state machines so you know where exactly you were, and so on...)</sentencetext>
</comment>
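The tl;dr above (catch exceptions, dump them in the logs, use a state machine so you know where you were) can be sketched as follows; `FeedHandler` and its states are invented for illustration:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("feed")

# A tiny explicit state machine: on failure, the log records exactly where we were.
STATES = ("CONNECTING", "SUBSCRIBED", "STREAMING", "CLOSED")

class FeedHandler:
    def __init__(self):
        self.state = "CONNECTING"

    def transition(self, new_state):
        assert new_state in STATES
        log.info("state %s -> %s", self.state, new_state)
        self.state = new_state

    def process(self, message):
        try:
            if self.state != "STREAMING":
                raise RuntimeError("message received while in state " + self.state)
            return message.upper()  # stand-in for real message handling
        except Exception:
            # Dump the exception *and* the current state into the log.
            log.exception("failed in state %s", self.state)
            return None

h = FeedHandler()
h.transition("SUBSCRIBED")
h.transition("STREAMING")
```

The point of the state field is that a log line like "failed in state SUBSCRIBED" localizes a bug in a distributed system far faster than a stack trace alone.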
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31095182</id>
	<title>a.out</title>
	<author>Anonymous</author>
	<datestamp>1265041740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>A book using<nobr> <wbr></nobr>.NET is being referred to as <a href="http://en.wikipedia.org/wiki/A.out" title="wikipedia.org" rel="nofollow">AOUT?</a> [wikipedia.org]</p><p>What has the world come to...</p></htmltext>
<tokenext>A book using .NET is being referred to as AOUT ?
[ wikipedia.org ] What has the world come to.. .</tokentext>
<sentencetext>A book using .NET is being referred to as AOUT?
[wikipedia.org] What has the world come to...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089596</id>
	<title>Frosty P1ss!!11!!11oneone</title>
	<author>Anonymous</author>
	<datestamp>1265056140000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext>Yo mama likes to test my unit.  Uh huh huh huh, unit.</htmltext>
<tokenext>Yo mama likes to test my unit .
Uh huh huh huh , unit .</tokentext>
<sentencetext>Yo mama likes to test my unit.
Uh huh huh huh, unit.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091078</id>
	<title>Re:Error coding...</title>
	<author>msclrhd</author>
	<datestamp>1265020380000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Error handling and reporting is very complex. This is mostly due to the complexities involved with different parts of a system interacting with each other.</p><p>The Windows GetErrorInfo call will return NULL on the second call, despite no other COM calls made in-between. It took a while to understand what was happening.</p><p>Do you check that an IErrorInfo object is valid for the method you just called by using ISupportsErrorInfo? Do you check to see if this is a C#/.NET System.Exception (_Exception) and then use ToString to get back the nice stack trace?</p><p>Does the Windows API call return a HRESULT, an NT error code, a registry API error code, a BOOL with a corresponding GetLastError call to get the details, or something else? Are you checking errno on all C API calls? Or the appropriate error checking call for the API you are working with? Do you realise that some Windows APIs have different return code behaviour on Win9x and NT+ (e.g. the GDI calls)?</p><p>Do you guard all your COM calls written in C++ to ensure that a C++ exception does not leak outside the COM boundary? Do you report said exception as a HRESULT and IErrorInfo that lets you track down the problem?</p><p>Do you ensure that an exception is not thrown from outside a destructor? A thread function? A Windows/GTK/Qt/... event handler? Across language boundaries?</p><p>How do you present an error to the user? Are you showing the exception message (e.g. displaying E_UNEXPECTED as "Catastrophic failure")?</p><p>It is not always possible to write perfect code. Developers will forget to check for access permissions to a file before writing to it, or just ignore any errors (did you know that std::ofstream will not report an error if the file it is writing to is read-only, at least with the Microsoft C++ implementation and Windows, without any fancy std::ios flags?).</p><p>Have you ever dealt with infinite recursions that involve 3 or more functions? Do your COM/DBus/... 
calls check for/handle network failures? Do you report these in a friendly way to the user? Do you try to recover/reconnect in this case?</p><p>Knowing exactly what a system call does is also impossible, unless you have access to the source code for that particular configuration. The MSDN documentation is not reliable for Windows APIs, as it leaves a lot of the important stuff out. The POSIX documentation only covers the important/most common error cases.</p></htmltext>
<tokenext>Error handing and reporting is very complex .
This is mostly due to the complexities involved with different parts of a system interacting with each other.Windows GetErrorInfo call will return NULL on the second call , despite no other COM calls made in-between .
This took a while to understand what was happening.Do you check that an IErrorInfo object is valid for the method you just called by using ISupportsErrorInfo ?
Do you check to see if this is a C # /.NET System.Exception ( \ _Exception ) and then use ToString to get back the nice stack trace ? Does the Windows API call return a HRESULT , an NT error code , a registry API error code , a BOOL with a corresponding GetLastError call to get the details or something else ?
Are you checking errno on all C API calls ?
Or the appropriate error checking call for the API you are working with ?
Do you realise that some Windows APIs have different return code behaviour on Win9x and NT + ( e.g .
the GDI calls ) ? Do you guard all your COM calls written in C + + to ensure that a C + + exception does not leak outside the COM boundary ?
Do you report said exception as a HRESULT and IErrorInfo that lets you track down the problem ? Do you ensure that an exception is not thrown from outside a destructor ?
A thread function ?
A Windows/GTK/Qt/... event handler ?
Across language boundaries ? How do you present an error to the user ?
Are you showing the exception message ( e.g .
displaying E \ _UNEXPECTED as " Catastrophic failure " ) ? It is not always possible to write perfect code .
Developers will forget to check for access permissions to a file before writing to it , or just ignore any errors ( did you know that std : : ofstream will not report an error if the file it is writing to is read-only ( at least with the Microsoft C + + implementation and Windows , without any fancy std : : ios flags ?
) ) .Have you ever dealt with infinite recursions that involve 3 or more functions ?
Do your COM/DBus/... calls check for/handle network failures ?
Do you report these in a friendly way to the user ?
Do you try to recover/reconnect in this case ? Knowing exactly what a system call does is also impossible , unless you have access to the source code for that particular configuration .
The MSDN documentation is not reliable for Windows APIs , as it leaves a lot of the important stuff out .
The POSIX documentation only covers the important/most common error cases .</tokentext>
<sentencetext>Error handling and reporting is very complex.
This is mostly due to the complexities involved with different parts of a system interacting with each other.
The Windows GetErrorInfo call will return NULL on the second call, despite no other COM calls made in-between.
It took a while to understand what was happening.
Do you check that an IErrorInfo object is valid for the method you just called by using ISupportsErrorInfo?
Do you check to see if this is a C#/.NET System.Exception (_Exception) and then use ToString to get back the nice stack trace?
Does the Windows API call return a HRESULT, an NT error code, a registry API error code, a BOOL with a corresponding GetLastError call to get the details, or something else?
Are you checking errno on all C API calls?
Or the appropriate error checking call for the API you are working with?
Do you realise that some Windows APIs have different return code behaviour on Win9x and NT+ (e.g. the GDI calls)?
Do you guard all your COM calls written in C++ to ensure that a C++ exception does not leak outside the COM boundary?
Do you report said exception as a HRESULT and IErrorInfo that lets you track down the problem?
Do you ensure that an exception is not thrown from outside a destructor?
A thread function?
A Windows/GTK/Qt/... event handler?
Across language boundaries?
How do you present an error to the user?
Are you showing the exception message (e.g. displaying E_UNEXPECTED as "Catastrophic failure")?
It is not always possible to write perfect code.
Developers will forget to check for access permissions to a file before writing to it, or just ignore any errors (did you know that std::ofstream will not report an error if the file it is writing to is read-only, at least with the Microsoft C++ implementation and Windows, without any fancy std::ios flags?).
Have you ever dealt with infinite recursions that involve 3 or more functions?
Do your COM/DBus/... calls check for/handle network failures?
Do you report these in a friendly way to the user?
Do you try to recover/reconnect in this case?
Knowing exactly what a system call does is also impossible, unless you have access to the source code for that particular configuration.
The MSDN documentation is not reliable for Windows APIs, as it leaves a lot of the important stuff out.
The POSIX documentation only covers the important/most common error cases.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089958</parent>
</comment>
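The boundary-guarding question above is Windows/COM-specific, but the discipline carries over to any callback that crosses a framework or language boundary. A hedged Python analogy (all names invented; `E_FAIL`/`S_OK` are stand-ins for COM-style HRESULT codes): translate every escaping exception into an error code at the boundary instead of letting it propagate into the caller:

```python
import functools
import logging

log = logging.getLogger("boundary")

E_FAIL, S_OK = -1, 0  # stand-ins for COM-style HRESULT codes

def boundary_guard(func):
    """Wrap a handler so no exception leaks across the boundary; report a code instead."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            func(*args, **kwargs)
            return S_OK
        except Exception:
            # Log the full traceback so the failure can still be tracked down.
            log.exception("handler %s failed", func.__name__)
            return E_FAIL
    return wrapper

@boundary_guard
def on_event(payload):
    if payload is None:
        raise ValueError("no payload")
```

The decorator plays the role the comment assigns to the HRESULT + IErrorInfo pair: the caller gets a well-defined code, and the details land in a place where you can find them.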
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31096792</id>
	<title>As is Beck's book...</title>
	<author>Anonymous</author>
	<datestamp>1265920380000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>Interesting that you should mention Kent Beck's book, as I too have read it recently and found it to be the shittiest pile of steaming turd that I've ever seen put into book form. It was *SO* slow going, so condescending, and it was so sorely lacking in the way of cohesive rational arguments that if I hadn't been sold TDD through other means, I would have abandoned the concept altogether. It might be okay as an introduction to a complete novice programmer, but if you've had any experience in the industry at all, I'd recommend avoiding it unless you either want something to put you to sleep, or you're a sucker for punishment.</p></htmltext>
<tokenext>Interesting that you should mention Kent Beck 's book , as I too have read it recently and found it to be the shittiest pile of steaming turd at I 've ever seen put into book form .
It was * SO * slow going , so condescending , and it was so sorely lacking in the way of cohesive rational arguments that if I had n't been sold TDD through other means , I would have abandoned the concept altogether .
It might be okay as an introduction to a complete novice programmer , but if you 've had any experience in the industry at all , I 'd recommend avoiding it unless you either want something to put you to sleep , or you 're a sucker for punishment .</tokentext>
<sentencetext>Interesting that you should mention Kent Beck's book, as I too have read it recently and found it to be the shittiest pile of steaming turd that I've ever seen put into book form.
It was *SO* slow going, so condescending, and it was so sorely lacking in the way of cohesive rational arguments that if I hadn't been sold TDD through other means, I would have abandoned the concept altogether.
It might be okay as an introduction to a complete novice programmer, but if you've had any experience in the industry at all, I'd recommend avoiding it unless you either want something to put you to sleep, or you're a sucker for punishment.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089894</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090276</id>
	<title>tl;dr</title>
	<author>Anonymous</author>
	<datestamp>1265016720000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext>tl;dr</htmltext>
<tokenext>tl ; dr</tokentext>
<sentencetext>tl;dr</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089958</id>
	<title>Error coding...</title>
	<author>girlintraining</author>
	<datestamp>1265057940000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>0</modscore>
	<htmltext><p>Could I go out on a limb here and ask why error handling is considered a black art, requiring truckloads of books to understand? I've done well following a few basic rules;</p><p>1. Know exactly what the system call does before you use it.<br>2. Check the return value of <i>every</i> one.<br>3. Check the permissions when you access a resource.<br>4. Blocking calls are a necessary evil. Putting them in the main loop is not.<br>5. Always check a pointer before you use it.<br>5a.<nobr> <wbr></nobr>...even if it is a return from a system call that never fails.<br>6. Build your project in pieces -- and try to cause as many different failure conditions as possible.<br>6a. Anything that could require new equipment if failure testing kills it? Use someone else's.<br>7. No matter how good your code is, that el cheapo power supply is waiting. And it is hungry.</p></htmltext>
<tokenext>Could I go out on a limb here and ask why error handling is considered a black art , requiring truckloads of books to understand ?
I 've done well following a few basic rules ; 1 .
Know exactly what the system call does before you use it.2 .
Check the return value of every one.3 .
Check the permissions when you access a resource.4 .
Blocking calls are a necessary evil .
Putting them in the main loop is not.5 .
Always check a pointer before you use it.5a .
...even if it is a return from a system call that never fails.6 .
Build your project in pieces -- and try to cause as many different failure conditions as possible.6a .
Anything that could require new equipment if failure testing kills it ?
Use someone else 's.7 .
No matter how good your code is , that el cheapo power supply is waiting .
And it is hungry .</tokentext>
<sentencetext>Could I go out on a limb here and ask why error handling is considered a black art, requiring truckloads of books to understand?
I've done well following a few basic rules:
1. Know exactly what the system call does before you use it.
2. Check the return value of every one.
3. Check the permissions when you access a resource.
4. Blocking calls are a necessary evil. Putting them in the main loop is not.
5. Always check a pointer before you use it.
5a. ...even if it is a return from a system call that never fails.
6. Build your project in pieces -- and try to cause as many different failure conditions as possible.
6a. Anything that could require new equipment if failure testing kills it? Use someone else's.
7. No matter how good your code is, that el cheapo power supply is waiting. And it is hungry.</sentencetext>
</comment>
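Rules 2 and 3 above can be sketched in Python (illustrative only; `write_safely` is an invented example, and in Python "check the return value" mostly becomes "never swallow the exception"):

```python
import os
import tempfile

def write_safely(path, data):
    """Illustrative only: rules 2 and 3 from the list above, in Python terms."""
    directory = os.path.dirname(path) or "."
    # Rule 3: check the permissions when you access a resource.
    if not os.access(directory, os.W_OK):
        return False
    try:
        with open(path, "w") as fh:
            # Rule 2: check what every call reports; write() returns a count.
            if fh.write(data) != len(data):
                return False
    except OSError:
        # The exception *is* the return value here; ignoring it breaks rule 2.
        return False
    return True

# Usage: writing into a fresh temporary directory should succeed.
tmp = tempfile.mkdtemp()
target = os.path.join(tmp, "out.txt")
ok = write_safely(target, "hello")
```

Note the try/except still wraps the write: the up-front `os.access` check is advisory (the permissions can change between check and use), so the error path must be handled either way.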
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090684</id>
	<title>Re:xUnit Test Patterns</title>
	<author>jgrahn</author>
	<datestamp>1265018700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>For anyone familiar with the basics of unit testing but struggling to implement it in real world scenarios, I'd strongly recommend xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros.</p></div></blockquote><p>
Aargh! They managed to mention unit tests, patterns and refactoring in the same title!
</p><p>
Also, I really dislike xUnit, as I've seen it wedged into Python's unittest module and CPPUnit (C++).
It's a horrible design which just gets in the way -- I don't understand what valid reasons a book has to rely on it (except buzzword compliance).</p><blockquote><div><p>The idea is not only that automated testing is good, but that testable code is fundamentally better because it needs to be loosely coupled.</p></div></blockquote><p>And loosely coupled code is fundamentally better *why*?
"Because it can be easily unit tested" is the only argument I can swallow<nobr> <wbr></nobr>...
</p><p>
Loose coupling was a popular catchphrase in the early 1990s (along with Software Reuse),
but that kind of thinking is the source of lots of overly-general and vague code.</p></div>
	</htmltext>
<tokenext>For anyone familiar with the basics of unit testing but struggling to implement it in real world scenarios , I 'd strongly recommend xUnit Test Patterns : Refactoring Test Code by Gerard Meszaros .
Aargh ! They managed to mention unit tests , patterns and refactoring in the same title !
Also , I really dislike xUnit , as I 've seen it wedged into Python 's unittest module and CPPUnit ( C + + ) .
It 's a horrible design which just gets in the way -- I do n't understand what valid reasons a book has to rely on it ( except buzzword compilance ) .The idea is not only that automated testing is good , but that testable code is fundamentally better because it needs to be loosely coupled.And loosely coupled code is fundamentally better * why * ?
" Because it can be easily unit tested " is the only argument I can swallow .. . Loose coupling was a popular catchphrase in the early 1990s ( along with Software Reuse ) , but that kind of thinking is the source of lots of overly-general and vague code .</tokentext>
<sentencetext>For anyone familiar with the basics of unit testing but struggling to implement it in real world scenarios, I'd strongly recommend xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros.
Aargh! They managed to mention unit tests, patterns and refactoring in the same title!
Also, I really dislike xUnit, as I've seen it wedged into Python's unittest module and CPPUnit (C++).
It's a horrible design which just gets in the way -- I don't understand what valid reasons a book has to rely on it (except buzzword compliance).
The idea is not only that automated testing is good, but that testable code is fundamentally better because it needs to be loosely coupled.
And loosely coupled code is fundamentally better *why*?
"Because it can be easily unit tested" is the only argument I can swallow ...

Loose coupling was a popular catchphrase in the early 1990s (along with Software Reuse),
but that kind of thinking is the source of lots of overly-general and vague code.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089842</parent>
</comment>
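For readers who haven't met the term: "xUnit" names the design family that Python's unittest and CPPUnit both follow — a TestCase class, per-test setUp/tearDown fixtures, and assert methods. A minimal sketch of the pattern being criticized:

```python
import unittest

class StackTest(unittest.TestCase):
    def setUp(self):
        # Fixture: runs before *every* test method, giving each a fresh stack.
        self.stack = []

    def tearDown(self):
        # Cleanup: runs after every test method.
        self.stack = None

    def test_push_then_pop_returns_item(self):
        self.stack.append(3)
        self.assertEqual(self.stack.pop(), 3)

    def test_new_stack_is_empty(self):
        self.assertEqual(len(self.stack), 0)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(StackTest))
```

The class/fixture machinery is exactly the overhead the comment objects to; for two tests this simple, plain assert statements would carry the same information.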
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089934</id>
	<title>Re:Frosty P1ss!!11!!11oneone</title>
	<author>Anonymous</author>
	<datestamp>1265057820000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>So she uses your unit to test the magnification level of newer electron microscopes?  Has she come across one yet that has the resolution to find yours?</p></htmltext>
<tokenext>So she uses your unit to test the magnification level of newer electron microscopes ?
Have she come across one yet that has the resolution to find yours ?</tokentext>
<sentencetext>So she uses your unit to test the magnification level of newer electron microscopes?
Has she come across one yet that has the resolution to find yours?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089596</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089606</id>
	<title>Going to be buying this</title>
	<author>PmanAce</author>
	<datestamp>1265056200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>This will fit nicely beside my msbuild book collecting dust on my desk. Jokes aside, we do tons of unit testing and I have never seen a book solely on unit testing for<nobr> <wbr></nobr>.NET with TDD, mocking, etc.

I'm stoked!</htmltext>
<tokenext>This will fit nicely besides my msbuild book collecting dust on my desk .
Jokes aside , we do tons of unit testing and I have never seen a book solely on unit testing for .NET with TDD , mocking , etc .
I 'm stoked !</tokentext>
<sentencetext>This will fit nicely beside my msbuild book collecting dust on my desk.
Jokes aside, we do tons of unit testing and I have never seen a book solely on unit testing for .NET with TDD, mocking, etc.
I'm stoked!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091176</id>
	<title>Cover</title>
	<author>dbialac</author>
	<datestamp>1265020800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Somebody likes their Anime a bit too much.</p></htmltext>
<tokenext>Somebody likes their Anime a bit too much .</tokentext>
<sentencetext>Somebody likes their Anime a bit too much.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090840</id>
	<title>Pet peeve - the purpose of testing</title>
	<author>Zoxed</author>
	<datestamp>1265019360000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Rule #1 of all testing: The purpose of testing is not to prove that the code works: the purpose of testing is to *try to break* the program.<br>(A good tester is Evil: extremes of values, try to get it to divide by 0 etc.)</p></htmltext>
<tokenext>Rule # 1 of all testing : The purpose of testing is not to prove that the code works : the purpose of testing is to * try to break * the program .
( A good tester is Evil : extremes of values , try to get it to divide by 0 etc .
)</tokentext>
<sentencetext>Rule #1 of all testing: The purpose of testing is not to prove that the code works: the purpose of testing is to *try to break* the program.
(A good tester is Evil: extremes of values, try to get it to divide by 0, etc.)</sentencetext>
</comment>
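In that spirit, a small illustrative sketch of "evil" tests — attacking a function with extremes and a forced divide-by-zero rather than confirming the happy path (`safe_ratio` is an invented example):

```python
import math
import unittest

def safe_ratio(a, b):
    """Invented function under test: must reject b == 0 explicitly."""
    if b == 0:
        raise ValueError("b must be nonzero")
    return a / b

class EvilTester(unittest.TestCase):
    def test_divide_by_zero_is_rejected(self):
        with self.assertRaises(ValueError):
            safe_ratio(1, 0)

    def test_extreme_values(self):
        # Extremes of values: huge magnitudes should still behave sanely.
        self.assertEqual(safe_ratio(1e308, 1e308), 1.0)
        self.assertTrue(math.isfinite(safe_ratio(-1e300, 3)))

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(EvilTester))
```

A happy-path-only suite would never have asked the divide-by-zero or overflow questions, which is exactly the rule's point.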
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089610</id>
	<title>heh heh</title>
	<author>Anonymous</author>
	<datestamp>1265056260000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>0</modscore>
	<htmltext>unit heh heh</htmltext>
<tokenext>unit heh heh</tokentext>
<sentencetext>unit heh heh</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31092382</id>
	<title>Re:Error coding...</title>
	<author>Anonymous</author>
	<datestamp>1265025840000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><blockquote><div><p>1. Know exactly what the system call does before you use it.<br>2. Check the return value of every one.<br>3. Check the permissions when you access a resource.<br>4. Blocking calls are a necessary evil. Putting them in the main loop is not.<br>5. Always check a pointer before you use it.<br>5a.<nobr> <wbr></nobr>...even if it is a return from a system call that never fails.</p></div></blockquote><p>Catching errors, which is all that you've listed, is so easy that any moron can do it.  Error handling, i.e. knowing what to do with them afterwards, is the hard part.</p><p>(Displaying an error and terminating the program is almost always the wrong answer.)</p></div>
	</htmltext>
<tokenext>1 .
Know exactly what the system call does before you use it.2 .
Check the return value of every one.3 .
Check the permissions when you access a resource.4 .
Blocking calls are a necessary evil .
Putting them in the main loop is not.5 .
Always check a pointer before you use it.5a .
...even if it is a return from a system call that never fails.Catching errors , which is all that you 've listed , is so easy that any moron can do it .
Error handling , i.e .
knowing what to do with them afterwards , is the hard part .
( Displaying an error and terminating the program is almost always the wrong answer .
)</tokentext>
<sentencetext>1. Know exactly what the system call does before you use it.
2. Check the return value of every one.
3. Check the permissions when you access a resource.
4. Blocking calls are a necessary evil. Putting them in the main loop is not.
5. Always check a pointer before you use it.
5a. ...even if it is a return from a system call that never fails.
Catching errors, which is all that you've listed, is so easy that any moron can do it.
Error handling, i.e. knowing what to do with them afterwards, is the hard part.
(Displaying an error and terminating the program is almost always the wrong answer.)
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089958</parent>
</comment>
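The catching-versus-handling distinction can be sketched in a few lines (hedged: `flaky_fetch` is an invented stand-in for any unreliable call): handling means deciding what happens next — retry the transient failure, then degrade gracefully — rather than displaying the error and terminating:

```python
def flaky_fetch(failures):
    """Stand-in for any unreliable call: fails `failures[0]` times, then succeeds."""
    if failures[0] > 0:
        failures[0] -= 1
        raise ConnectionError("transient failure")
    return "payload"

def fetch_with_retry(failures, retries=3):
    # Handling, not just catching: retry transient failures a bounded number
    # of times, then degrade gracefully instead of terminating the program.
    for _ in range(retries):
        try:
            return flaky_fetch(failures)
        except ConnectionError:
            continue  # transient: try again
    return None       # let the caller decide what a missing payload means
```

The `except` block is the easy part; the retry bound and the `None` fallback are the actual error-handling decisions the comment is talking about.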
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089674</id>
	<title>Wrong Unit</title>
	<author>techno-vampire</author>
	<datestamp>1265056500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>When I first saw the article's title, I thought that this was the <a href="http://en.wikipedia.org/wiki/UNIT" title="wikipedia.org">UNIT</a> [wikipedia.org] it was referring to.  Says a lot about the type of people I hang out with, doesn't it?</htmltext>
<tokenext>When I first saw the article 's title , I thought that this was the UNIT [ wikipedia.org ] it was referring to .
Says a lot about the type of people I hang out with , does n't it ?</tokentext>
<sentencetext>When I first saw the article's title, I thought that this was the UNIT [wikipedia.org] it was referring to.
Says a lot about the type of people I hang out with, doesn't it?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090532</id>
	<title>Just wondering...</title>
	<author>Anonymous</author>
	<datestamp>1265018160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>What's<nobr> <wbr></nobr>.NET?</p></htmltext>
<tokenext>What 's .NET ?</tokentext>
<sentencetext>What's .NET?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090192</id>
	<title>Re:xUnit Test Patterns</title>
	<author>prockcore</author>
	<datestamp>1265016180000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><blockquote><div><p>but that testable code is fundamentally better because it needs to be loosely coupled.</p></div></blockquote><p>I disagree.  It builds a false sense of security, and artificially increases complexity.  You end up making your units smaller and smaller in order to keep each item discrete and separate.</p><p>It's like a car built out of LEGO: sure, you can take any piece off and attach it anywhere else, but the problems are not with the individual pieces but with how you put them together... and you aren't testing that if you're only doing unit testing.</p></div>
	</htmltext>
<tokenext>but that testable code is fundamentally better because it needs to be loosely coupled.I disagree .
It builds a false sense of security , and artificially increases complexity .
You end up making your units smaller and smaller in order to keep each item discrete and separate.It 's like a car built out of LEGO , sure you can take any piece off and attach it anywhere else , but the problems are not with the individual pieces , but how you put them together.. and you are n't testing that if you 're only doing unit testing .</tokentext>
<sentencetext>but that testable code is fundamentally better because it needs to be loosely coupled.
I disagree.
It builds a false sense of security, and artificially increases complexity.
You end up making your units smaller and smaller in order to keep each item discrete and separate.
It's like a car built out of LEGO: sure, you can take any piece off and attach it anywhere else, but the problems are not with the individual pieces but with how you put them together... and you aren't testing that if you're only doing unit testing.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089842</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091002</id>
	<title>Does he back up anything he says</title>
	<author>TheCycoONE</author>
	<datestamp>1265020080000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>I was at Dev Days in Toronto a few months ago, and one of the speakers brought up a very good point relating to different software engineering methodologies.  He said that despite all the literature written on them, and the huge amount of money involved, there have been very few good studies on the effectiveness of various techniques.  He went on to challenge the effectiveness of unit testing and 'agile development.'  The only methodology for which he had found studies demonstrating significant effectiveness was peer code review.</p><p>This brings me to my question.  Does this book say anything concrete with citations to back it up, or is it all the opinion of one person?</p></htmltext>
<tokenext>I was at Dev Days in Toronto a few months ago , and one of the speakers brought up a very good point relating to different software engineering methodologies .
He said that despite all the literature written on them , and the huge amount of money involved , there has been very few good studies on the effectiveness of various techniques .
He went on to challenge the effectiveness of unit testing and 'agile development .
' The only methodology he had found studies to demonstrate significant effectiveness was peer code review.This brings me to my question .
Does this book say anything concrete with citations to back it up , or is it all the opinion of one person ?</tokentext>
<sentencetext>I was at Dev Days in Toronto a few months ago, and one of the speakers brought up a very good point relating to different software engineering methodologies.
He said that despite all the literature written on them, and the huge amount of money involved, there has been very few good studies on the effectiveness of various techniques.
He went on to challenge the effectiveness of unit testing and 'agile development.
'  The only methodology he had found studies to demonstrate significant effectiveness was peer code review.This brings me to my question.
Does this book say anything concrete with citations to back it up, or is it all the opinion of one person?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31098596</id>
	<title>Re:Unit testing is not a silver bullet</title>
	<author>Civil_Disobedient</author>
	<datestamp>1265898540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>1. trivial to write, therefore useless<br>2. impossible to write, therefore useless</i></p><p>This has been my experience as well.</p></htmltext>
<tokenext>1. trivial to write , therefore useless2 .
impossible to write , therefore uselessThis has been my experience as well .</tokentext>
<sentencetext>1. trivial to write, therefore useless2.
impossible to write, therefore uselessThis has been my experience as well.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090310</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091268</id>
	<title>Engineering not an art?</title>
	<author>fm6</author>
	<datestamp>1265020980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You seem to think that "art" refers to something that is fundamentally <a href="http://www.abcgallery.com/M/matisse/matisse51.html" title="abcgallery.com">mysterious</a> [abcgallery.com]. A lot of art is, but that's not an intrinsic feature. The word itself has a lot of different meanings. Here are some of the most fundamental, from the Oxford English Dictionary.</p><p>
&nbsp; &nbsp; &nbsp; &nbsp; 1. Skill in doing something, esp. as the result of knowledge or practice.<br>
&nbsp; &nbsp; &nbsp; &nbsp; 2. Skill in the practical application of the principles of a particular field of knowledge or learning; technical skill. Ob<br>
&nbsp; &nbsp; &nbsp; &nbsp; 3. As a count noun.<br>
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; a. A practical application of knowledge; (hence) something which can be achieved or understood by the employment of skill and knowledge...<br>
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; b. A practical pursuit or trade of a skilled nature, a craft; an activity that can be achieved or mastered by the application of specialist skills...<br>
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; c. A company of craftsmen; a guild.<br>
&nbsp; &nbsp; &nbsp; &nbsp; 4. With modifying word or words denoting skill in a particular craft, profession, or other sphere of activity.<br>
&nbsp; &nbsp; &nbsp; &nbsp; 5. An acquired ability of any kind; a skill at doing a specified thing, typically acquired through study and practice; a knack. Freq. in the art of &mdash;.</p><p>Before you can offer an informed opinion as to what is and is not engineering, you need to read something by Henry Petroski. He defines it as "the art of rearranging the materials and forces of nature".</p></htmltext>
<tokenext>You seem to think that " art " refers to something that is fundamentally mysterious [ abcgallery.com ] .
A lot of art is , but that 's not an intrinsic feature .
The word itself has a lot different meanings .
Here are some of the most fundamental , from the Oxford English Dictionary .
        1 .
Skill in doing something , esp .
as the result of knowledge or practice .
        2 .
Skill in the practical application of the principles of a particular field of knowledge or learning ; technical skill .
Ob         3 .
As a count noun .
              a. A practical application of knowledge ; ( hence ) something which can be achieved or understood by the employment of skill and knowledge.. .               b. A practical pursuit or trade of a skilled nature , a craft ; an activity that can be achieved or mastered by the application of specialist skills.. .               c. A company of craftsmen ; a guild .
        4 .
With modifying word or words denoting skill in a particular craft , profession , or other sphere of activity .
        5 .
An acquired ability of any kind ; a skill at doing a specified thing , typically acquired through study and practice ; a knack .
Freq. in the art of    .Before you can offer an informed opinion as to what is and is not engineering , you need to read something by Henry Petroski .
He defines it as " the art of rearranging the materials and forces of nature " .</tokentext>
<sentencetext>You seem to think that "art" refers to something that is fundamentally mysterious [abcgallery.com].
A lot of art is, but that's not an intrinsic feature.
The word itself has a lot different meanings.
Here are some of the most fundamental, from the Oxford English Dictionary.
        1.
Skill in doing something, esp.
as the result of knowledge or practice.
        2.
Skill in the practical application of the principles of a particular field of knowledge or learning; technical skill.
Ob
        3.
As a count noun.
              a. A practical application of knowledge; (hence) something which can be achieved or understood by the employment of skill and knowledge...
              b. A practical pursuit or trade of a skilled nature, a craft; an activity that can be achieved or mastered by the application of specialist skills...
              c. A company of craftsmen; a guild.
        4.
With modifying word or words denoting skill in a particular craft, profession, or other sphere of activity.
        5.
An acquired ability of any kind; a skill at doing a specified thing, typically acquired through study and practice; a knack.
Freq. in the art of —.Before you can offer an informed opinion as to what is and is not engineering, you need to read something by Henry Petroski.
He defines it as "the art of rearranging the materials and forces of nature".</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089888</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090352</id>
	<title>Re:Error coding...</title>
	<author>S77IM</author>
	<datestamp>1265017140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Maybe you're on the wrong thread?  It's a review of a book about unit testing, not error handling.  The two are only moderately related, as far as programming concerns go.</p><p>
&nbsp; -- 77IM</p></htmltext>
<tokenext>Maybe you 're on the wrong thread ?
It 's a review of a book about unit testing , not error handling .
The two are only moderately related , as far as programming concerns go .
  -- 77IM</tokentext>
<sentencetext>Maybe you're on the wrong thread?
It's a review of a book about unit testing, not error handling.
The two are only moderately related, as far as programming concerns go.
  -- 77IM</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089958</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089842</id>
	<title>xUnit Test Patterns</title>
	<author>Nasarius</author>
	<datestamp>1265057220000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext>For anyone familiar with the basics of unit testing but struggling to implement it in real world scenarios, I'd strongly recommend <i>xUnit Test Patterns: Refactoring Test Code</i> by Gerard Meszaros.
<br> <br>
The idea is not only that automated testing is good, but that <i>testable</i> code is fundamentally better because it needs to be loosely coupled. I still struggle to follow TDD in many scenarios, especially where I'm closely interacting with system APIs, but just reading xUnit Test Patterns has given me tons of ideas that improved my code.</htmltext>
<tokenext>For anyone familiar with the basics of unit testing but struggling to implement it in real world scenarios , I 'd strongly recommend xUnit Test Patterns : Refactoring Test Code by Gerard Meszaros .
The idea is not only that automated testing is good , but that testable code is fundamentally better because it needs to be loosely coupled .
I still struggle to follow TDD in many scenarios , especially where I 'm closely interacting with system APIs , but just reading xUnit Test Patterns has given me tons of ideas that improved my code .</tokentext>
<sentencetext>For anyone familiar with the basics of unit testing but struggling to implement it in real world scenarios, I'd strongly recommend xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros.
The idea is not only that automated testing is good, but that testable code is fundamentally better because it needs to be loosely coupled.
I still struggle to follow TDD in many scenarios, especially where I'm closely interacting with system APIs, but just reading xUnit Test Patterns has given me tons of ideas that improved my code.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090060</id>
	<title>Good news for you all...</title>
	<author>Anonymous</author>
	<datestamp>1265015340000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>You're all at increased risk for heart disease!<nobr> <wbr></nobr>:-)  Bye bye!</p><p><a href="http://uk.reuters.com/article/idUKTRE61900L20100210" title="reuters.com" rel="nofollow">http://uk.reuters.com/article/idUKTRE61900L20100210</a> [reuters.com]</p></htmltext>
<tokenext>You 're all at increased risk for heart disease !
: - ) Bye bye ! http : //uk.reuters.com/article/idUKTRE61900L20100210 [ reuters.com ]</tokentext>
<sentencetext>You're all at increased risk for heart disease!
:-)  Bye bye!http://uk.reuters.com/article/idUKTRE61900L20100210 [reuters.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090492</id>
	<title>Obligatory</title>
	<author>TXFRATBoy</author>
	<datestamp>1265017980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>This is Slashdot...the only unit testing is by manual means...ZING!</htmltext>
<tokenext>This is Slashdot...the only unit testing is by manual means...ZING !</tokentext>
<sentencetext>This is Slashdot...the only unit testing is by manual means...ZING!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093160</id>
	<title>Re:xUnit Test Patterns</title>
	<author>geekoid</author>
	<datestamp>1265030340000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Smaller pieces are easier to test, easier to maintain, easier to document, and severely reduce the chance of introducing a new bug when changes need to be made.</p><p>Unit testing helps enforce small code pieces.</p><p>"f lots of overly-general and vague code."</p><p>If that's true, then you have dealt with some extremely poor programmers. I suggest working with software engineers instead of programmers.</p><p>Re-use of common pieces is a good thing, and loosely coupled code makes that easier to do as well.</p></htmltext>
<tokenext>Smaller piece are easier to test , easier to maintain , easier to document and several reduce the chance of putting a new bug when changes need to be made.Unit testing helps enforce small code pieces .
" f lots of overly-general and vague code .
" If that 's true , then you have dealt with some extreme poor programmers .
I suggest working with software engineers instead of programmers.re-use of common piece is a good thing , and loosely coupled code makes the easier to do as well .</tokentext>
<sentencetext>Smaller piece are easier to test, easier to maintain, easier to document and several reduce the chance of putting a new bug when changes need to be made.Unit testing helps enforce small code pieces.
"f lots of overly-general and vague code.
"If that's true, then you have dealt with some extreme poor programmers.
I suggest working with software engineers instead of programmers.re-use of common piece is a good thing, and loosely coupled code makes the easier to do as well.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090684</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093776</id>
	<title>Re:xUnit Test Patterns</title>
	<author>Canberra Bob</author>
	<datestamp>1265033460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>And loosely coupled code is fundamentally better *why*?<br>"Because it can be easily unit tested" is the only argument I can swallow<nobr> <wbr></nobr>...</p></div><p>On the past few systems I have worked on I have had the "fun" job of adding new features to existing legacy code.  Adding features to the existing tightly coupled code was a nightmare, finding what did exactly what took ages, some functionality was partially performed in several different locations - each relying on the previous part - and the slightest spec change would need the whole thing to be re-done yet again.  The exact same spec changes (eg a new element in a message) were trivial to do in the applications I had written from scratch as each change only needed to be done in a single location and was easy to test.  I may have seen some extreme cases but I have certainly become a "loosely coupled" evangelist since.</p></div>
	</htmltext>
<tokenext>And loosely coupled code is fundamentally better * why * ?
" Because it can be easily unit tested " is the only argument I can swallow ...On the past few systems I have worked on I have had the " fun " job of adding new features to existing legacy code .
Adding features to the existing tightly coupled code was a nightmare , finding what did exactly what took ages , some functionality was partially performed in several different locations - each relying on the previous part - and the slightest spec change would need the whole thing to be re-done yet again .
The exact same spec changes ( eg a new element in a message ) were trivial to do in the applications I had written from scratch as each change only needed to be done in a single location and was easy to test .
I may have seen some extreme cases but I have certainly become a " loosely coupled " evangelist since .</tokentext>
<sentencetext>And loosely coupled code is fundamentally better *why*?
"Because it can be easily unit tested" is the only argument I can swallow ...On the past few systems I have worked on I have had the "fun" job of adding new features to existing legacy code.
Adding features to the existing tightly coupled code was a nightmare, finding what did exactly what took ages, some functionality was partially performed in several different locations - each relying on the previous part - and the slightest spec change would need the whole thing to be re-done yet again.
The exact same spec changes (eg a new element in a message) were trivial to do in the applications I had written from scratch as each change only needed to be done in a single location and was easy to test.
I may have seen some extreme cases but I have certainly become a "loosely coupled" evangelist since.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090684</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090792</id>
	<title>Re:Error coding...</title>
	<author>jgrahn</author>
	<datestamp>1265019240000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><blockquote><div><p>Could I go out on a limb here and ask why error handling is considered a black art, requiring truckloads of books to understand? I've done well following a few basic rules;
</p><p>
1. Know exactly what the system call does before you use it.<br>
2. Check the return value of every one.<br>
3. Check the permissions when you access a resource.<br>
4. Blocking calls are a necessary evil. Putting them in the main loop is not.<br>
5. Always check a pointer before you use it.<br>...</p></div></blockquote><p>
*Detecting* the problem isn't hard. What's hard is *handling* it -- and there was nothing about that on your list. Hint: calling <tt>abort(2)</tt> is not always acceptable.</p>
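The detect-versus-handle distinction can be made concrete with a short Python sketch (hypothetical names, not code from the parent's list): detection is the check itself; handling is the policy you choose when the check fails.

```python
# Hypothetical sketch: detecting a failure is the easy half; the handling
# policy (fallback, cleanup, report upward) is where the design work lives.

def read_config(path: str) -> dict:
    try:
        # Detection: open() raises OSError on a missing or unreadable file.
        with open(path) as f:
            return dict(line.strip().split("=", 1) for line in f if "=" in line)
    except OSError:
        # Handling: fall back to safe defaults instead of aborting.
        return {"timeout": "30"}

cfg = read_config("/nonexistent/app.conf")
print(cfg["timeout"])  # -> 30 (the fallback, not a crash)
```

The `try` line is the trivial part; deciding that a missing config means "use defaults" rather than "terminate" is the handling decision the checklist above never addresses.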
	</htmltext>
<tokenext>Could I go out on a limb here and ask why error handling is considered a black art , requiring truckloads of books to understand ?
I 've done well following a few basic rules ; 1 .
Know exactly what the system call does before you use it .
2. Check the return value of every one .
3. Check the permissions when you access a resource .
4. Blocking calls are a necessary evil .
Putting them in the main loop is not .
5. Always check a pointer before you use it .
.. . * Detecting * the problem is n't hard .
What 's hard is * handling * it -- and there was nothing about that on your list .
Hint : calling abort ( 2 ) is not always acceptable .</tokentext>
<sentencetext>Could I go out on a limb here and ask why error handling is considered a black art, requiring truckloads of books to understand?
I've done well following a few basic rules;

1.
Know exactly what the system call does before you use it.
2. Check the return value of every one.
3. Check the permissions when you access a resource.
4. Blocking calls are a necessary evil.
Putting them in the main loop is not.
5. Always check a pointer before you use it.
...
*Detecting* the problem isn't hard.
What's hard is *handling* it -- and there was nothing about that on your list.
Hint: calling abort(2) is not always acceptable.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089958</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089888</id>
	<title>It's not art, it's basic engineering</title>
	<author>Anonymous</author>
	<datestamp>1265057580000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>The only part that is an "art" is working out how to successfully isolate the component that you're trying to test. For simple components at lower layers (typically data CRUD) it's not so hard. Once you find you're having to jump through hoops to set up your stubs, it gets harder to "fake" them successfully and becomes a more error prone and time consuming process. It can also be difficult if there's security in the way. The very checks you've put in to prevent security violations now have to be worked around or bypassed for your unit tests. There's also a danger of becoming too confident in your code because it passes the test when run against stub data.  You may find there's a bug specific to the interfaces you've stubbed. (For example a bug in a vendor's database driver, or a bug in your data access framework that doesn't show up against your stub).</p><p>All of those distracting side issues and complications aside, we are dealing with fundamental engineering principles. Build a component, test a component. Nothing could be simpler, in principle. So it's disappointing when developers get so caught up in the side issues that they resist unit testing. There does come a point where working around obstacles makes unit testing hard and you have to way benefit against cost and ask yourself how realistic the test is. But you don't go into a project assuming every component is too hard to unit test. That's just lazy and self-defeating. It comes down to the simple fact that many programmers aren't very good at breaking down a problem. In industries where their work was more transparent, they wouldn't last long. In software development where your code is abstract and the fruit of your work takes a long time to get to production, bad developers remain.</p></htmltext>
<tokenext>The only part that is an " art " is working out how to successfully isolate the component that you 're trying to test .
For simple components at lower layers ( typically data CRUD ) it 's not so hard .
Once you find you 're having to jump through hoops to set up your stubs , it gets harder to " fake " them successfully and becomes a more error prone and time consuming process .
It can also be difficult if there 's security in the way .
The very checks you 've put in to prevent security violations now have to be worked around or bypassed for your unit tests .
There 's also a danger of becoming too confident in your code because it passes the test when run against stub data .
You may find there 's a bug specific to the interfaces you 've stubbed .
( For example a bug in a vendor 's database driver , or a bug in your data access framework that does n't show up against your stub ) .All of those distracting side issues and complications aside , we are dealing with fundamental engineering principles .
Build a component , test a component .
Nothing could be simpler , in principle .
So it 's disappointing when developers get so caught up in the side issues that they resist unit testing .
There does come a point where working around obstacles makes unit testing hard and you have to way benefit against cost and ask yourself how realistic the test is .
But you do n't go into a project assuming every component is too hard to unit test .
That 's just lazy and self-defeating .
It comes down to the simple fact that many programmers are n't very good at breaking down a problem .
In industries where their work was more transparent , they would n't last long .
In software development where your code is abstract and the fruit of your work takes a long time to get to production , bad developers remain .</tokentext>
<sentencetext>The only part that is an "art" is working out how to successfully isolate the component that you're trying to test.
For simple components at lower layers (typically data CRUD) it's not so hard.
Once you find you're having to jump through hoops to set up your stubs, it gets harder to "fake" them successfully and becomes a more error prone and time consuming process.
It can also be difficult if there's security in the way.
The very checks you've put in to prevent security violations now have to be worked around or bypassed for your unit tests.
There's also a danger of becoming too confident in your code because it passes the test when run against stub data.
You may find there's a bug specific to the interfaces you've stubbed.
(For example a bug in a vendor's database driver, or a bug in your data access framework that doesn't show up against your stub).All of those distracting side issues and complications aside, we are dealing with fundamental engineering principles.
Build a component, test a component.
Nothing could be simpler, in principle.
So it's disappointing when developers get so caught up in the side issues that they resist unit testing.
There does come a point where working around obstacles makes unit testing hard and you have to way benefit against cost and ask yourself how realistic the test is.
But you don't go into a project assuming every component is too hard to unit test.
That's just lazy and self-defeating.
It comes down to the simple fact that many programmers aren't very good at breaking down a problem.
In industries where their work was more transparent, they wouldn't last long.
In software development where your code is abstract and the fruit of your work takes a long time to get to production, bad developers remain.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090006</id>
	<title>Related: Working Effectively with Legacy Code</title>
	<author>noidentity</author>
	<datestamp>1265014980000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>I've read several unit testing books recently, and another I found somewhat useful is Michael Feathers' <a href="http://news.slashdot.org/article.pl?sid=08/09/29/1328206" title="slashdot.org"> <i> Working Effectively with Legacy Code</i></a> [slashdot.org]. It has all sorts of techniques for testing legacy code, i.e. code that wasn't designed for testability, and to which you want to make as few modifications as possible. So he gets into techniques like putting in a local header file to replace the normal one for some class used by the code, so that you can write a replacement class (mock) that behaves in a way that better exercises the code. Unfortunately, Feathers' book is also somewhat tiring to read, due to a verbose writing style and rough editing, but I don't know anything better.</p></htmltext>
<tokenext>I 've read several unit testing books recently , and another I found somewhat useful is Michael Feathers ' Working Effectively with Legacy Code [ slashdot.org ] .
It has all sorts of techniques for testing legacy code , i.e .
code that was n't designed for testability , and which you want to make as few modifications to .
So he gets into techniques like putting a local header file to replace the normal one for some class used by the code , so that you can write a replacement class ( mock ) that behaves in a way that better exercises the code .
Unfortunately Feathers ' book is also somewhat tiring to read , due to a verbose writing style and rough editing , but I do n't know anything better .</tokentext>
<sentencetext>I've read several unit testing books recently, and another I found somewhat useful is Michael Feathers'   Working Effectively with Legacy Code [slashdot.org].
It has all sorts of techniques for testing legacy code, i.e.
code that wasn't designed for testability, and which you want to make as few modifications to.
So he gets into techniques like putting a local header file to replace the normal one for some class used by the code, so that you can write a replacement class (mock) that behaves in a way that better exercises the code.
Unfortunately Feathers' book is also somewhat tiring to read, due to a verbose writing style and rough editing, but I don't know anything better.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31095774</id>
	<title>Re:Does he back up anything he says</title>
	<author>Anonymous</author>
	<datestamp>1265046180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Microsoft has a very long-standing, proven commitment to not caring about quality. Perhaps when attending such marketing events it would be advisable to first question the speaker rather than what's spoken?</p></htmltext>
<tokenext>Microsoft has a very long standing proven commitment to not caring about quality .
Perhaps when attending such marketing events it would advisable to first question the speaker rather than what 's spoken ?</tokentext>
<sentencetext>Microsoft has a very long standing proven commitment to not caring about quality.
Perhaps when attending such marketing events it would advisable to first question the speaker rather than what's spoken?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091002</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091182</id>
	<title>Re:xUnit Test Patterns</title>
	<author>shutdown -p now</author>
	<datestamp>1265020800000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p><div class="quote"><p>The idea is not only that automated testing is good, but that testable code is fundamentally better because it needs to be loosely coupled.</p></div><p>Which is a faulty assumption. Coming from this perspective, you want to unit test everything, and so you need to make everything loosely coupled. But the latter is not free, and sometimes the cost can be hefty - where a simple coding pattern would do before (say, a static factory method), you now get the mess with interface for every single class in your program, abstract factories everywhere (or IoC/DI with its maze of XML configs).</p><p>Ultimately, you write larger amounts of code that is harder to follow and harder to maintain, for 1) a real benefit of being able to unit test it, and 2) for an illusory benefit of being able to extend it easier. The reason why that last benefit is illusory is because, in most cases, you'll never actually use it, and in most cases when you do use it, the cost of maintaining the loosely coupled code up to that point is actually much more than the price you'd have paid for refactoring it to suit your new needs if you left it simple (and more coupled) originally.</p><p>Also, it does promote some patterns that are actively harmful. For example, in C#, methods are not virtual by default, and it's a <a href="http://www.artima.com/intv/nonvirtual.html" title="artima.com">conscious design decision</a> [artima.com] to avoid the versioning problem with <a href="http://blogs.msdn.com/ericlippert/archive/2004/01/07/virtual-methods-and-brittle-base-classes.aspx" title="msdn.com">brittle base classes</a> [msdn.com]. But "testable code" must have all methods virtual in order for them to be mocked! So you either have to carefully consider the brittle base class issue for <em>every single method you write</em>, or just say "screw them all" and forget about it (the Java approach). 
The latter is what most people choose, and, naturally, it doesn't exactly increase product quality.</p><p>Of course, this all hinges on the definition of "testable code". The problem with that is that it's essentially defined by the limitations of current mainstream unit testing frameworks, particularly their mocking capabilities. "Oh, you need interfaces everywhere because we can't mock sealed classes or non-virtual members". And then a convenient explanation is concocted that says that this style is actually "testable code", and it's an inherently good one, regardless of any testing.</p><p>Gladly, TypeMock is about the only sane<nobr> <wbr></nobr>.NET unit testing framework out there - it lets you mock <em>anything</em>. Sealed classes, static members, constructors, non-virtual methods... you name it, it's there. And that is as it should be. It lets you design your API, thinking of issues that are actually <em>relevant</em> to that design - carefully considering versioning problems, not forgetting ease of use and conciseness, and providing the degree of decoupling that is relevant to a specific task at hand - with no regard to any limitations the testing framework sets.</p><p>It's no surprise that some people from the TDD community are <a href="http://vkreynin.wordpress.com/2007/04/10/typemock-too-powerful-to-use/" title="wordpress.com">hostile towards TypeMock</a> [wordpress.com] because it's "too powerful", and doesn't force the programmer to conform to their vision of "testable code". But it's rather ironic, anyway, given how TDD itself is by and large an offshoot of Agile, which had always promoted principles such as "do what works" and "make things no more complicated than necessary".</p></div>
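The same point about powerful mocking can be sketched in Python rather than .NET (a hedged illustration, not TypeMock itself): `unittest.mock.patch.object` replaces a concrete method at test time, so the class under test needs no interface or virtual member introduced purely for testability.

```python
# Sketch (hypothetical classes): the mocking tool patches a plain concrete
# class, so the design needn't add interfaces just to satisfy the test.
import time
from unittest.mock import patch

class Clock:                      # ordinary concrete class, no interface
    def now_hour(self) -> int:
        return time.localtime().tm_hour

class Greeter:
    def __init__(self):
        self.clock = Clock()      # direct, "tightly coupled" construction

    def greet(self) -> str:
        return "Good morning" if self.clock.now_hour() < 12 else "Good afternoon"

# Patch the concrete method only for the duration of the test:
with patch.object(Clock, "now_hour", return_value=9):
    print(Greeter().greet())      # -> Good morning
```

Whether a design *should* hard-wire `Clock()` is a separate argument; the sketch only shows that a sufficiently powerful mocking framework removes "mockability" as a reason to restructure it.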
	</htmltext>
<tokenext>The idea is not only that automated testing is good , but that testable code is fundamentally better because it needs to be loosely coupled.Which is a faulty assumption .
Coming from this perspective , you want to unit test everything , and so you need to make everything loosely coupled .
But the latter is not free , and sometimes the cost can be hefty - where a simple coding pattern would do before ( say , a static factory method ) , you now get the mess with interface for every single class in your program , abstract factories everywhere ( or IoC/DI with its maze of XML configs ) .Ultimately , you write larger amounts of code that is harder to follow and harder to maintain , for 1 ) a real benefit of being able to unit test it , and 2 ) for an illusory benefit of being able to extend it easier .
The reason why that last benefit is illusory is because , in most cases , you 'll never actually use it , and in most cases when you do use it , the cost of maintaining the loosely coupled code up to that point is actually much more than the price you 'd have paid for refactoring it to suit your new needs if you left it simple ( and more coupled ) originally.Also , it does promote some patterns that are actively harmful .
For example , in C # , methods are not virtual by default , and it 's a conscious design decision [ artima.com ] to avoid the versioning problem with brittle base classes [ msdn.com ] .
But " testable code " must have all methods virtual in order for them to be mocked !
So you either have to carefully consider the brittle base class issue for every single method you write , or just say " screw them all " and forget about it ( the Java approach ) .
The latter is what most people choose , and , naturally , it does n't exactly increase product quality.Of course , this all hinges on the definition of " testable code " .
The problem with that is that it 's essentially defined by the limitations of current mainstream unit testing frameworks , particularly their mocking capabilities .
" Oh , you need interfaces everywhere because we ca n't mock sealed classes or non-virtual members " .
And then a convenient explanation is concocted that says that this style is actually " testable code " , and it 's an inherently good one , regardless of any testing.Gladly , TypeMock is about the only sane .NET unit testing framework out there - it lets you mock anything .
Sealed classes , static members , constructors , non-virtual methods... you name it , it 's there .
And that is as it should be .
It lets you design your API , thinking of issues that are actually relevant to that design - carefully considering versioning problems , not forgetting ease of use and conciseness , and providing the degree of decoupling that is relevant to a specific task at hand - with no regard to any limitations the testing framework sets.It 's no surprise that some people from the TDD community are hostile towards TypeMock [ wordpress.com ] because it 's " too powerful " , and does n't force the programmer to conform to their vision of " testable code " .
But it 's rather ironic , anyway , given how TDD itself is by and large an offshoot of Agile , which had always promoted principles such as " do what works " and " make things no more complicated than necessary " .</tokentext>
<sentencetext>The idea is not only that automated testing is good, but that testable code is fundamentally better because it needs to be loosely coupled.Which is a faulty assumption.
Coming from this perspective, you want to unit test everything, and so you need to make everything loosely coupled.
But the latter is not free, and sometimes the cost can be hefty - where a simple coding pattern would do before (say, a static factory method), you now get the mess with interface for every single class in your program, abstract factories everywhere (or IoC/DI with its maze of XML configs).Ultimately, you write larger amounts of code that is harder to follow and harder to maintain, for 1) a real benefit of being able to unit test it, and 2) for an illusory benefit of being able to extend it easier.
The reason why that last benefit is illusory is because, in most cases, you'll never actually use it, and in most cases when you do use it, the cost of maintaining the loosely coupled code up to that point is actually much more than the price you'd have paid for refactoring it to suit your new needs if you left it simple (and more coupled) originally.Also, it does promote some patterns that are actively harmful.
For example, in C#, methods are not virtual by default, and it's a conscious design decision [artima.com] to avoid the versioning problem with brittle base classes [msdn.com].
But "testable code" must have all methods virtual in order for them to be mocked!
So you either have to carefully consider the brittle base class issue for every single method you write, or just say "screw them all" and forget about it (the Java approach).
The latter is what most people choose, and, naturally, it doesn't exactly increase product quality.Of course, this all hinges on the definition of "testable code".
The problem with that is that it's essentially defined by the limitations of current mainstream unit testing frameworks, particularly their mocking capabilities.
"Oh, you need interfaces everywhere because we can't mock sealed classes or non-virtual members".
And then a convenient explanation is concocted that says that this style is actually "testable code", and it's an inherently good one, regardless of any testing.Gladly, TypeMock is about the only sane .NET unit testing framework out there - it lets you mock anything.
Sealed classes, static members, constructors, non-virtual methods... you name it, it's there.
And that is as it should be.
It lets you design your API, thinking of issues that are actually relevant to that design - carefully considering versioning problems, not forgetting ease of use and conciseness, and providing the degree of decoupling that is relevant to a specific task at hand - with no regard to any limitations the testing framework sets.It's no surprise that some people from the TDD community are hostile towards TypeMock [wordpress.com] because it's "too powerful", and doesn't force the programmer to conform to their vision of "testable code".
But it's rather ironic, anyway, given how TDD itself is by and large an offshoot of Agile, which had always promoted principles such as "do what works" and "make things no more complicated than necessary".
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089842</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090118</id>
	<title>Lol at backward wielding sword samurai</title>
	<author>Anonymous</author>
	<datestamp>1265015700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Lol at backward wielding sword samurai..</p></htmltext>
<tokenext>Lol at backward wielding sword samurai. .</tokentext>
<sentencetext>Lol at backward wielding sword samurai..</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31094804</id>
	<title>These are not the units you are looking for</title>
	<author>galego</author>
	<datestamp>1265038860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>[waves hand in front of face]</htmltext>
<tokenext>[ waves hand in front of face ]</tokentext>
<sentencetext>[waves hand in front of face]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089894</id>
	<title>Tiring to read</title>
	<author>noidentity</author>
	<datestamp>1265057580000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext><p>I read this book recently and found it tiring. Much of it reads like a blog, and like many books, the author randomly switches stances. He'll refer to the reader as "the reader", "you", "we", and in the third person. This is the kind of book where it's hard to keep a clear idea of what the author is talking about, because he doesn't have a clear idea of what he's trying to communicate.

</p><p>When I think of tiring books like this, I cannot help remembering Steve McConnell's <i>Code Complete</i> (first edition; I haven't looked at the second edition yet). Reading that book is like having your autonomy assaulted, because the author constantly tries to get you to accept the things he's claiming, via whatever means necessary, rather than presenting them along with rational arguments and letting you decide when to apply them. I'm not saying Osherove's book is that bad, just that it has the same unenjoyable aspect that makes it a chore to read and get useful information from.

</p><p>I recently also read Kent Beck's <i>Test-Driven Development</i> and highly recommend it, if you simply want to learn about unit testing and test-driven development. It's concise and enjoyable to read. Unfortunately it doesn't cover as many details, and I don't have any good alternatives to books like Osherove's (and I've read many at my local large university library).</p></htmltext>
<tokenext>I read this book recently and found it tiring .
Much of it reads like a blog , and like many books , the author randomly switches stances .
He 'll refer to the reader as " the reader " , " you " , " we " , and in the third person .
This is the kind of book where it 's hard to keep a clear idea of what the author is talking about , because he does n't have a clear idea of what he 's trying to communicate .
When I think of tiring books like this , I can not avoid always remembering Steve McConnel 's Code Complete ( first edition ; I have n't looked at the second edition yet ) .
Reading that book is like having your autonomy assaulted , because the author constantly tries to get you to accept the things he 's claiming , via whatever means necessary , rather than presenting them along with rational arguments , and letting you decide when to apply them .
I 'm not saying Osherove 's book is that bad , just that it has that same unenjoyable aspect that makes it a chore to read and get useful information from .
I recently also read Kent Beck 's Test-Driven Development and highly recommend it , if you simply want to learn about unit testing and test-driven development .
It 's concise and enjoyable to read .
Unfortunately it does n't cover as many details , and I do n't have any good alternatives to books like Osherove 's ( and I 've read many at my local large university library ) .</tokentext>
<sentencetext>I read this book recently and found it tiring.
Much of it reads like a blog, and like many books, the author randomly switches stances.
He'll refer to the reader as "the reader", "you", "we", and in the third person.
This is the kind of book where it's hard to keep a clear idea of what the author is talking about, because he doesn't have a clear idea of what he's trying to communicate.
When I think of tiring books like this, I cannot avoid always remembering Steve McConnel's Code Complete (first edition; I haven't looked at the second edition yet).
Reading that book is like having your autonomy assaulted, because the author constantly tries to get you to accept the things he's claiming, via whatever means necessary, rather than presenting them along with rational arguments, and letting you decide when to apply them.
I'm not saying Osherove's book is that bad, just that it has that same unenjoyable aspect that makes it a chore to read and get useful information from.
I recently also read Kent Beck's Test-Driven Development and highly recommend it, if you simply want to learn about unit testing and test-driven development.
It's concise and enjoyable to read.
Unfortunately it doesn't cover as many details, and I don't have any good alternatives to books like Osherove's (and I've read many at my local large university library).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31092768</id>
	<title>Re:xUnit Test Patterns</title>
	<author>Anonymous</author>
	<datestamp>1265028480000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>So you are saying testable code is not fundamentally better because if you only do unit testing you don't do integration testing?</p><p>It's like saying cake isn't good because if you only eat sugar you don't eat bacon.</p></htmltext>
<tokenext>So you are saying testable code is not fundamentally better because if you only do unit testing you do n't do integration testing ? It 's like saying cake is n't good because if you only eat sugar you do n't eat bacon .</tokentext>
<sentencetext>So you are saying testable code is not fundamentally better because if you only do unit testing you don't do integration testing?It's like saying cake isn't good because if you only eat sugar you don't eat bacon.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090192</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089870</id>
	<title>Why when I was a young man in the program . . .</title>
	<author>Tanman</author>
	<datestamp>1265057460000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext><p>When I was a young man in the program, they tested the unit by having us march shoeless through 2 miles of uphill, mine-ridden, barbed-wire-laced snow! The unit got tested, and tested HARD! The program didn't allow for no pansy-ass pussy-footers.  And did the unit in the program pass its tests? By God it did!  You youngsters got it easy just havin to do some stupid vocabulary test to test your unit in the program.  Plugging in words. HAH! Try plugging in the gaping hole left by the bark of an exploding tree!</p></htmltext>
<tokenext>When I was a young man in the program , they tested the unit by having us march shoeless through 2 miles of uphill , mine-ridden , barbed-wire-laced snow !
The unit got tested , and tested HARD !
The program did n't allow for no pansy-ass pussy-footers .
And did the unit in the program pass its tests ?
By God it did !
You youngsters got it easy just havin to do some stupid vocabulary test to test your unit in the program .
Plugging in words .
HAH ! Try plugging in the gaping hole left by the bark of an exploding tree !</tokentext>
<sentencetext>When I was a young man in the program, they tested the unit by having us march shoeless through 2 miles of uphill, mine-ridden, barbed-wire-laced snow!
The unit got tested, and tested HARD!
The program didn't allow for no pansy-ass pussy-footers.
And did the unit in the program pass its tests?
By God it did!
You youngsters got it easy just havin to do some stupid vocabulary test to test your unit in the program.
Plugging in words.
HAH! Try plugging in the gaping hole left by the bark of an exploding tree!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093212</id>
	<title>Re:Unit testing is not a silver bullet</title>
	<author>geekoid</author>
	<datestamp>1265030640000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>That just means you are horrible at your job and that you think no one else will ever work on it.</p><p>"I find full logging and reliable time synchronization both easier to implement and more useful in tracking bugs and / or design errors in environment I deal with than unit testing."<br>THAT is a separate issue, that you should ALSO do.</p><p>I suspect you have no clue why you should be designing and using unit tests.</p></htmltext>
<tokenext>That just means you are horrible at your job and that you think no one else will ever work on it .
" I find full logging and reliable time synchronization both easier to implement and more useful in tracking bugs and / or design errors in environment I deal with than unit testing .
" THAT is a separate issue , that you should ALSO do.I suspect you have no clue why you should be designing and using unit tests .</tokentext>
<sentencetext>That just means you are horrible at your job and that you think no one else will ever work on it.
"I find full logging and reliable time synchronization both easier to implement and more useful in tracking bugs and / or design errors in environment I deal with than unit testing.
"THAT is a separate issue, that you should ALSO do.I suspect you have no clue why you should be designing and using unit tests.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090310</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093012</id>
	<title>Re:Does he back up anything he says</title>
	<author>Aladrin</author>
	<datestamp>1265029680000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>I have never seen any scientific studies on it, but I use Unit Testing as a tool to help me code and debug better, and it works a LOT better than anything I tried prior to that.  And when I break some of my old code, I know exactly what's breaking with just a glance.</p><p>Also, I have occasionally been charged with massive changes to an existing system, and Unit Testing is the only thing I know of that lets me guarantee the code functions exactly the same before and after for existing uses.</p><p>tl;dr - I don't need a scientific study to tell me a tool is working well for me.</p></htmltext>
<tokenext>I have never seen any scientific studies on it , but I use Unit Testing as a tool to help me code and debug better and it works a LOT better than anything I tried prior to that .
And when I break some of my old code , I know exactly what 's breaking with just a glance.Also , I have occasionally be charged with massive changes to an existing system , and Unit Testing is the only thing I know of that lets me guarantee the code functions exactly the same before and after for existing uses.tl ; dr - I do n't need a scientific study to tell me a tool is working well for me .</tokentext>
<sentencetext>I have never seen any scientific studies on it, but I use Unit Testing as a tool to help me code and debug better and it works a LOT better than anything I tried prior to that.
And when I break some of my old code, I know exactly what's breaking with just a glance.Also, I have occasionally be charged with massive changes to an existing system, and Unit Testing is the only thing I know of that lets me guarantee the code functions exactly the same before and after for existing uses.tl;dr - I don't need a scientific study to tell me a tool is working well for me.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091002</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090426</id>
	<title>Re:xUnit Test Patterns</title>
	<author>Lunix Nutcase</author>
	<datestamp>1265017620000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext><p><div class="quote"><p>It's like a car built out of LEGO, sure you can take any piece off and attach it anywhere else, but the problems are not with the individual pieces, but how you put them together.. and you aren't testing that if you're only doing unit testing.</p></div><p>And that's why you do integration testing too.</p></div>
	</htmltext>
<tokenext>It 's like a car built out of LEGO , sure you can take any piece off and attach it anywhere else , but the problems are not with the individual pieces , but how you put them together.. and you are n't testing that if you 're only doing unit testing.And that 's why you do integration testing too .</tokentext>
<sentencetext>It's like a car built out of LEGO, sure you can take any piece off and attach it anywhere else, but the problems are not with the individual pieces, but how you put them together.. and you aren't testing that if you're only doing unit testing.And that's why you do integration testing too.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090192</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091270</id>
	<title>Re:Error coding...</title>
	<author>shutdown -p now</author>
	<datestamp>1265020980000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p><div class="quote"><p>Check the permissions when you access a resource.</p></div><p>Careful - you can easily have a race condition there. Say you're trying to open a file. You check the permissions beforehand and find that everything is fine. Meanwhile, another process does `chmod a-r` on the file, and your following open() call fails, even though the security check just succeeded.</p></div>
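As a concrete sketch of that check-then-use race (Python here for brevity; the function name is made up), the usual fix is to skip the separate permission check and just attempt the operation, handling the failure:

```python
import os
import tempfile

def read_if_allowed(path):
    # The racy shape is: check os.access(path, os.R_OK), then open().
    # Another process can chmod or delete the file between those steps.
    # Attempting the open and handling the failure makes the check and
    # the use a single step.
    try:
        with open(path) as f:
            return f.read()
    except (PermissionError, FileNotFoundError):
        return None

fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)
print(read_if_allowed(path))            # prints: hello
print(read_if_allowed(path + ".gone"))  # prints: None
os.unlink(path)
```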
	</htmltext>
<tokenext>Check the permissions when you access a resource.Careful , you can easily have a race condition there .
Say , you 're trying to open a file .
You check for permissions before doing so , and find out that everything is fine .
Meanwhile , another process in the system does ` chmod a-r ` on the file - and your following open ( ) call fails , even though the security check just succeeded .</tokentext>
<sentencetext>Check the permissions when you access a resource.Careful, you can easily have a race condition there.
Say, you're trying to open a file.
You check for permissions before doing so, and find out that everything is fine.
Meanwhile, another process in the system does `chmod a-r` on the file - and your following open() call fails, even though the security check just succeeded.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089958</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090318</id>
	<title>Re:xUnit Test Patterns</title>
	<author>MobyDisk</author>
	<datestamp>1265016900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>The idea is not only that automated testing is good, but that testable code is fundamentally better</p></div><p>One of the main goals of TypeMock is to eliminate that.  TypeMock allows you to mock objects that were not designed to be mocked and are not loosely coupled.</p></div>
	</htmltext>
<tokenext>The idea is not only that automated testing is good , but that testable code is fundamentally betterOne of the main goals of of Typemock is to eliminate that .
TypeMock allows you to mock objects that were not designed to be mocked , and are not loosely coupled .</tokentext>
<sentencetext>The idea is not only that automated testing is good, but that testable code is fundamentally betterOne of the main goals of of Typemock is to eliminate that.
TypeMock allows you to mock objects that were not designed to be mocked, and are not loosely coupled.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089842</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31103718</id>
	<title>I overcame the problems of unit testing</title>
	<author>crovira</author>
	<datestamp>1265921340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>and unit specification early on in my career with a documentation technique which let me specify the order of, as well as the limits of, the API (whether human or systemic components were involved.)</p><p>My success and income over the years was derived from the work done in 1983-84, printed in Computer Language Magazine in 1990 and released into the wild in 2007.</p><p>Check out <a href="http://media.libsyn.com/media/msb/msb-0195_Rovira_Diagrams_PDF_Test.pdf" title="libsyn.com">http://media.libsyn.com/media/msb/msb-0195_Rovira_Diagrams_PDF_Test.pdf</a> [libsyn.com]</p></htmltext>
<tokenext>and unit specification early on in my career with a documentation technique which let me specify the order of as well as the limits of the API ( whether human or systemic components were involved .
) My success and income over the years was derived from the work doe in 1983-84 , printer in Computer Language Magazine in 1990 and released into the wild in 2007.Check out http : //media.libsyn.com/media/msb/msb-0195 \ _Rovira \ _Diagrams \ _PDF \ _Test.pdf [ libsyn.com ]</tokentext>
<sentencetext>and unit specification early on in my career with a documentation technique which let me specify the order of as well as the limits of the API (whether human or systemic components were involved.
)My success and income over the years was derived from the work doe in 1983-84, printer in Computer Language Magazine in 1990 and released into the wild in 2007.Check out http://media.libsyn.com/media/msb/msb-0195\_Rovira\_Diagrams\_PDF\_Test.pdf [libsyn.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091624</id>
	<title>Re:Unit testing is not a silver bullet</title>
	<author>scamper\_22</author>
	<datestamp>1265022420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'll say this much.</p><p>Unit testing has two big uses.<br>1.  It formalizes the testing you do anyway, and keeps that test.  Just today, I had to write a tricky regexp to split some logging apart.  I used the unit test just to formalize the testing I'd do anyway (feed in some dummy strings) to verify it works.</p><p>2.  It forces you to write better code.</p><p>2 is a bit flaky... if someone writes crappy code, unit testing isn't going to make them a better coder.  Yet, it does keep me in check.  There are countless times you just want to rush in some code that works well.   Being part of the unit testing mindset, you are forced to abstract away file access and database access to support mock objects or stubbing.  It forces me to write more object oriented code.</p><p>I really think that 2 is one of the reasons unit testing leads to better code.  It's not the actual testing.  But it's the testing that forces you to write your code in a certain way.    It won't make a bad developer a good one, but it will make a good one more consistent.</p></htmltext>
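Use 1 - formalizing the dummy-string checks you would do anyway - looks roughly like this in practice (an illustrative sketch, not the commenter's actual regexp; the log format here is made up, and Python stands in for any xUnit-style setup):

```python
import re

# Hypothetical log format: "LEVEL [component] message"
LOG_RE = re.compile(r"^(\w+) \[(.+?)\] (.*)$")

def split_log_line(line):
    m = LOG_RE.match(line)
    return m.groups() if m else None

# The same dummy strings you would feed in by hand, kept as tests:
def test_well_formed_line():
    expected = ("WARN", "db", "timeout after 30s")
    assert split_log_line("WARN [db] timeout after 30s") == expected

def test_garbage_is_rejected():
    assert split_log_line("not a log line") is None

test_well_formed_line()
test_garbage_is_rejected()
```

The point is that the ad-hoc verification does not evaporate after the first run; it sticks around and re-checks the regexp every time someone touches it.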
<tokenext>I 'll say this much.Unit testing has two big uses.1 .
it formalizes the testing you do anyways and keeps that test .
Just today , I had to write a tricky regexp to split some logging apart .
I used the unit test just to formalize the testing I 'd do anyways ( feed in some dummy strings ) to verify it works.2 .
It forces you to write better code.2 is a bit flakly... as if someone writes crappy code , unit testing is n't going to make them a better coder .
Yet , it does keep me check .
There are countless times you just want to rush some code in that works well .
Being part of unit testing mindset , you are forced to abstract away file access and database access to support mock objects or stubbing .
It forces me to write more object oriented code.I really that 2 is one of the reasons unit testing leads to better code .
It 's not the actual testing .
But its the testing that forces you to write your code in a certain way .
It wo n't make a bad developer a good one , but it will make a good one more consistent .</tokentext>
<sentencetext>I'll say this much.Unit testing has two big uses.1.
it formalizes the testing you do anyways and keeps that test.
Just today, I had to write a tricky regexp to split some logging apart.
I used the unit test just to formalize the testing I'd do anyways (feed in some dummy strings) to verify it works.2.
It forces you to write better code.2 is a bit flakly... as if someone writes crappy code, unit testing isn't going to make them a better coder.
Yet, it does keep me check.
There are countless times you just want to rush some code in that works well.
Being part of unit testing mindset, you are forced to abstract away file access and database access to support mock objects or stubbing.
It forces me to write more object oriented code.I really that 2 is one of the reasons unit testing leads to better code.
It's not the actual testing.
But its the testing that forces you to write your code in a certain way.
It won't make a bad developer a good one, but it will make a good one more consistent.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090310</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093258</id>
	<title>Re:Pet peeve - the purpose of testing</title>
	<author>PostPhil</author>
	<datestamp>1265030760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Uh no, it's to demonstrate that the code "works". The problem here is what it means "to work". Part of the usefulness of TDD is that you might not fully understand what it means "to work" yet, and the tests help you flesh that out.</p><p>Let me clarify, so you don't think I'm 100% ditching what you're saying versus stating it a different way. A test suite will tend to have BOTH tests for what the correct behavior *is* and also tests for what the correct behavior *is not*. In other words, what you're doing is defining the BOUNDARIES between correct and incorrect behavior. You're right in the sense that if your *strategy* is to write only *optimistic* tests (i.e. "proving that it works"), you'll miss subtle areas where the behavior isn't fully clarified (i.e. corner cases).</p><p>But here's the problem: for absolutely anything in the universe, there is an INFINITE number of things something *is not*, but only a finite number of things something *is*. I've seen people go too crazy with using tests as a way of type-checking everything where smarter data types would have been a better choice, or performing a hundred "this isn't what I want" tests that could have been handled with a single "this IS what I want" test. My point is that you're supposed to program for the correct case, not design as if you always expect everything to go wrong. Write for the correct case, test for the correct cases FIRST, test for the EXCEPTIONAL cases, and write handling code for the things that are exceptional. Don't write an infinite test suite of what something is not.</p><p>CONCLUSION: Write the most EFFECTIVE tests you can that cover the most ground. Don't write *pointless* tests you have to maintain later if there was a better test. If a test covers a lot of logical ground by defining the boundaries of what something *is not*, then write the test for that. If it covers a lot of ground by defining what something *is*, write the test for that.</p></htmltext>
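The "define the boundary, not the infinite complement" idea can be made concrete with a small sketch (illustrative only; the port validator is made up): one positive-shape test plus a few boundary tests stand in for the unbounded list of "is not" cases.

```python
# Hypothetical validator: a port number is an int in 1..65535.
def is_valid_port(value):
    return isinstance(value, int) and value in range(1, 65536)

# One test for the shape of what a port *is*...
assert is_valid_port(8080)

# ...and boundary tests that mark where "is" ends. These few cases
# cover the infinite set of things a port *is not*.
assert is_valid_port(1)
assert is_valid_port(65535)
assert not is_valid_port(0)
assert not is_valid_port(65536)
assert not is_valid_port("8080")
```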
<tokenext>Uh no , it 's to demonstrate that the code " works " .
The problem here is what it means " to work " .
Part of the usefulness of TDD is that you might not fully understand what it means " to work " yet , and the tests help you flesh that out.Let me clarify , so you do n't think I 'm 100 \ % ditching what you 're saying versus stating it a different way .
A test suite will tend to have BOTH tests for what the correct behavior * is * and also tests for what the correct behavior * is not * .
In other words , what you 're doing is defining the BOUNDARIES between correct and incorrect behavior .
You 're right in the sense that if your * strategy * is to write only * optimistic * tests ( i.e .
" proving that it works " ) , you 'll miss subtle areas where the behavior is n't fully clarified ( i.e .
corner cases ) . But here 's the problem : for absolutely anything in the universe , there is an INFINITE number of things something * is not * , but only a finite number of things something * is * .
I 've seen people go too crazy with using tests as a way of type-checking everything where smarter data types would have been a better choice , or performing a hundred " this is n't what I want " tests that could have been handled with a single " this IS what I want " test .
My point is that you 're supposed to program for the correct case , not design as if you always expect everything to go wrong .
Write for the correct case , test for the correct cases FIRST , test for the EXCEPTIONAL cases , and write handling code for the things that are exceptional .
Do n't write an infinite test suite of what something is not . CONCLUSION : Write the most EFFECTIVE tests you can that cover the most ground .
Do n't write * pointless * tests you have to maintain later if there was a better test .
If a test covers a lot of logical ground by defining the boundaries of what something * is not * , then write the test for that .
If it covers a lot of ground by defining what something * is * , write the test for that .</tokentext>
<sentencetext>Uh no, it's to demonstrate that the code "works".
The problem here is what it means "to work".
Part of the usefulness of TDD is that you might not fully understand what it means "to work" yet, and the tests help you flesh that out. Let me clarify, so you don't think I'm 100% ditching what you're saying versus stating it a different way.
A test suite will tend to have BOTH tests for what the correct behavior *is* and also tests for what the correct behavior *is not*.
In other words, what you're doing is defining the BOUNDARIES between correct and incorrect behavior.
You're right in the sense that if your *strategy* is to write only *optimistic* tests (i.e.
"proving that it works"), you'll miss subtle areas where the behavior isn't fully clarified (i.e.
corner cases). But here's the problem: for absolutely anything in the universe, there is an INFINITE number of things something *is not*, but only a finite number of things something *is*.
I've seen people go too crazy with using tests as a way of type-checking everything where smarter data types would have been a better choice, or performing a hundred "this isn't what I want" tests that could have been handled with a single "this IS what I want" test.
My point is that you're supposed to program for the correct case, not design as if you always expect everything to go wrong.
Write for the correct case, test for the correct cases FIRST, test for the EXCEPTIONAL cases, and write handling code for the things that are exceptional.
Don't write an infinite test suite of what something is not. CONCLUSION: Write the most EFFECTIVE tests you can that cover the most ground.
Don't write *pointless* tests you have to maintain later if there was a better test.
If a test covers a lot of logical ground by defining the boundaries of what something *is not*, then write the test for that.
If it covers a lot of ground by defining what something *is*, write the test for that.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090840</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089924</id>
	<title>Read it; Loved it.</title>
	<author>fyrie</author>
	<datestamp>1265057760000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext>I'm fairly experienced with unit testing, and I've read several books on the subject. This is by far the best introduction to unit testing I have read. The book, in very practical terms, explains in 300 pages that which took me about five years to learn the hard way. I think this book also has a lot of value for unit testers that got their start a decade or more ago but haven't kept up with recent trends.</htmltext>
<tokenext>I 'm fairly experienced with unit testing , and I 've read several books on the subject .
This is by far the best introduction to unit testing I have read .
The book , in very practical terms , explains in 300 pages that which took me about five years to learn the hard way .
I think this book also has a lot of value for unit testers that got their start a decade or more ago but have n't kept up with recent trends .</tokentext>
<sentencetext>I'm fairly experienced with unit testing, and I've read several books on the subject.
This is by far the best introduction to unit testing I have read.
The book, in very practical terms, explains in 300 pages that which took me about five years to learn the hard way.
I think this book also has a lot of value for unit testers that got their start a decade or more ago but haven't kept up with recent trends.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31095592</id>
	<title>Re:Error coding...</title>
	<author>Cassini2</author>
	<datestamp>1265044620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Actually, I despise programmers that depend on exceptions.  If you are trying to work on any kind of hardened system, like a real-time system, or a system that "just has to work", or a system that must work in fixed memory, exceptions are nightmares.  You have to prove that every single exception either can't happen or can be safely handled.  Even "safe" handling can be a challenge.  For instance, if an error occurs, the control system must keep running, otherwise more errors occur.  In some contingencies, the software must just "keep handling errors".  It is amazing how many functions can't handle a constant stream of errors without leaking memory, pausing for long periods of time, or pausing for operator input when no operator is present.
</p><p>One project involved complex mechanical automation.  The system performance was defined solely by the rate at which exceptions to the normal process occurred.  It was vital to have software that just works.  If the mechanical exceptions caused exceptions in the computer software, then everything stopped working.  It was impossible to keep track of all the complexities involving all the exceptions.
</p><p>The only solution was simple software that had very few error cases, and each error case clearly exposed the error handling.  As such, all error cases could be checked. Curiously, this was the same system that worked much better when ported from .NET to C.  For low-level code, C is a much better language, no matter how trendy .NET is.</p></htmltext>
<tokenext>Actually , I despise programmers that depend on exceptions .
If you are trying to work on any kind of hardened system , like a real-time system , or a system that " just has to work " , or a system that must work in fixed memory , exceptions are nightmares .
You have to prove that every single exception either ca n't happen or can be safely handled .
Even " safe " handling can be a challenge .
For instance , if an error occurs , the control system must keep running , otherwise more errors occur .
In some contingencies , the software must just " keep handling errors " .
It is amazing how many functions ca n't handle a constant stream of errors without leaking memory , pausing for long periods of time , or pausing for operator input when no operator is present .
One project involved complex mechanical automation .
The system performance was defined solely by the rate at which exceptions to the normal process occurred .
It was vital to have software that just works .
If the mechanical exceptions caused exceptions in the computer software , then everything stopped working .
It was impossible to keep track of all the complexities involving all the exceptions .
The only solution was simple software that had very few error cases , and each error case clearly exposed the error handling .
As such , all error cases could be checked .
Curiously , this was the same system that worked much better when ported from .NET to C. For low-level code , C is a much better language , no matter how trendy .NET is .</tokentext>
<sentencetext>Actually, I despise programmers that depend on exceptions.
If you are trying to work on any kind of hardened system, like a real-time system, or a system that "just has to work", or a system that must work in fixed memory, exceptions are nightmares.
You have to prove that every single exception either can't happen or can be safely handled.
Even "safe" handling can be a challenge.
For instance, if an error occurs, the control system must keep running, otherwise more errors occur.
In some contingencies, the software must just "keep handling errors".
It is amazing how many functions can't handle a constant stream of errors without leaking memory, pausing for long periods of time, or pausing for operator input when no operator is present.
One project involved complex mechanical automation.
The system performance was defined solely by the rate at which exceptions to the normal process occurred.
It was vital to have software that just works.
If the mechanical exceptions caused exceptions in the computer software, then everything stopped working.
It was impossible to keep track of all the complexities involving all the exceptions.
The only solution was simple software that had very few error cases, and each error case clearly exposed the error handling.
As such, all error cases could be checked.
Curiously, this was the same system that worked much better when ported from .NET to C.  For low-level code, C is a much better language, no matter how trendy .NET is.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090370</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090678</id>
	<title>Re:Unit testing is not a silver bullet</title>
	<author>Anonymous</author>
	<datestamp>1265018700000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext><p>Unit tests should be trivial for the majority of classes. Good OO design will cause many of your classes to be single purpose and simplistic; therefore the unit tests will also be simplistic. That's the point of OOD (or even modular design)--breaking down complex problems into many simpler problems*.</p><p>Maybe you should consider that unit testing is not just for validating the current set of objects but also validating that future revisions do not break compatibility. In other words, it makes regression testing possible or easier with automation.</p><p>Writing the unit tests also serves to prove to your teammates you've thought about boundary conditions and logic errors. When you're forced to think about them in a structured way, then you're in a better position to catch code bugs while writing the unit tests. Many times you'll find them before even executing the test code.</p><p>Note: If anyone responds with something along the lines of "complex problems cannot always be simplified" I will literally punch you--repeatedly.</p></htmltext>
<tokenext>Unit tests should be trivial for the majority of classes .
Good OO design will cause many of your classes to be single purpose and simplistic ; therefore the unit tests will also be simplistic .
That 's the point of OOD ( or even modular design ) --breaking down complex problems into many simpler problems * . Maybe you should consider that unit testing is not just for validating the current set of objects but also validating that future revisions do not break compatibility .
In other words , it makes regression testing possible or easier with automation . Writing the unit tests also serves to prove to your teammates you 've thought about boundary conditions and logic errors .
When you 're forced to think about them in a structured way then you 're in a better position to catch code bugs while writing the unit tests .
Many times you 'll find them before even executing the test code . Note : If anyone responds with something along the lines of " complex problems can not always be simplified " I will literally punch you--repeatedly .
<sentencetext>Unit tests should be trivial for the majority of classes.
Good OO design will cause many of your classes to be single purpose and simplistic; therefore the unit tests will also be simplistic.
That's the point of OOD (or even modular design)--breaking down complex problems into many simpler problems*. Maybe you should consider that unit testing is not just for validating the current set of objects but also validating that future revisions do not break compatibility.
In other words, it makes regression testing possible or easier with automation. Writing the unit tests also serves to prove to your teammates you've thought about boundary conditions and logic errors.
When you're forced to think about them in a structured way then you're in a better position to catch code bugs while writing the unit tests.
Many times you'll find them before even executing the test code. Note: If anyone responds with something along the lines of "complex problems cannot always be simplified" I will literally punch you--repeatedly.
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090310</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31097072
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090678
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090310
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31092768
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090192
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089842
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090642
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090310
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093212
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090310
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090310
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089934
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089596
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31095592
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090370
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089958
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091268
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089888
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31097736
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090678
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090310
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093160
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090684
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089842
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31096792
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089894
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093258
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090840
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090352
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089958
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091078
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089958
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090792
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089958
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31098596
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090310
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091270
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089958
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31095774
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091002
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31095046
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089842
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090426
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090192
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089842
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091182
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089842
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093776
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090684
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089842
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093892
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091002
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093012
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091002
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093416
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090522
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089888
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31098542
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089674
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090318
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089842
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_10_1432247_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31092382
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089958
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_1432247.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090840
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093258
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_1432247.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089674
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31098542
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_1432247.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089958
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091078
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090352
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31092382
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091270
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090792
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090370
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31095592
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_1432247.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089596
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089934
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_1432247.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089888
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090522
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093416
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091268
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_1432247.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090532
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_1432247.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091002
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31095774
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093892
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093012
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_1432247.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089842
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090192
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31092768
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090426
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31095046
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090684
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093776
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093160
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091182
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090318
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_1432247.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090310
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31091624
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31093212
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31098596
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090642
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31090678
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31097736
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31097072
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_10_1432247.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31089894
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_10_1432247.31096792
</commentlist>
</conversation>
