<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_12_30_2321238</id>
	<title>The Neuroscience of Screwing Up</title>
	<author>samzenpus</author>
	<datestamp>1262177100000</datestamp>
	<htmltext>resistant writes <i>"As the evocative title from Wired magazine implies, Kevin Dunbar of the University of Toronto has taken an <a href="http://www.wired.com/magazine/2009/12/fail_accept_defeat/all/1">in-depth and fascinating look at scientific error</a>, the scientists who cope with it, and sometimes transcend it to find new lines of inquiry. From the article: 'Dunbar came away from his in vivo studies with an unsettling insight: Science is a deeply frustrating pursuit. Although the researchers were mostly using established techniques, more than 50 percent of their data was unexpected. (In some labs, the figure exceeded 75 percent.) "The scientists had these elaborate theories about what was supposed to happen," Dunbar says. "But the results kept contradicting their theories. It wasn't uncommon for someone to spend a month on a project and then just discard all their data because the data didn't make sense."'"</i></htmltext>
<tokentext>resistant writes " As the evocative title from Wired magazine implies , Kevin Dunbar of the University of Toronto has taken an in-depth and fascinating look at scientific error , the scientists who cope with it , and sometimes transcend it to find new lines of inquiry .
From the article : 'Dunbar came away from his in vivo studies with an unsettling insight : Science is a deeply frustrating pursuit .
Although the researchers were mostly using established techniques , more than 50 percent of their data was unexpected .
( In some labs , the figure exceeded 75 percent .
) " The scientists had these elaborate theories about what was supposed to happen , " Dunbar says .
" But the results kept contradicting their theories .
It was n't uncommon for someone to spend a month on a project and then just discard all their data because the data did n't make sense .
" ' "</tokentext>
<sentencetext>resistant writes "As the evocative title from Wired magazine implies, Kevin Dunbar of the University of Toronto has taken an in-depth and fascinating look at scientific error, the scientists who cope with it, and sometimes transcend it to find new lines of inquiry.
From the article: 'Dunbar came away from his in vivo studies with an unsettling insight: Science is a deeply frustrating pursuit.
Although the researchers were mostly using established techniques, more than 50 percent of their data was unexpected.
(In some labs, the figure exceeded 75 percent.
) "The scientists had these elaborate theories about what was supposed to happen," Dunbar says.
"But the results kept contradicting their theories.
It wasn't uncommon for someone to spend a month on a project and then just discard all their data because the data didn't make sense.
"'"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601694</id>
	<title>Re:Ridiculous</title>
	<author>labnet</author>
	<datestamp>1259849640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Often the data is crap, because the measurements are so hard to make.<br>For example, you would think measuring temperature is easy. Not so.<br>Let's say you wish to determine the cooling capacity of an air conditioner.<br>How do you measure the temperature and air velocity gradients across both the return and supply air streams? Do I use 1 sensor, 10 sensors, or 100 sensors? Do you create turbulence or laminar flow? How accurate is the humidity measurement?<br>The point is, the data is often crap, because measurements are hard to make: time is limited, you can't afford the right equipment, there isn't enough labour, you couldn't fully simulate the environment, etc.</p></htmltext>
<tokentext>Often the data is crap , because the measurements are so hard to make . For example , you would think measuring temperature is easy .
Not so . Lets say you wish to determine the cooling capacity of an airconditioner . How do you measure the temperature and air velocity gradients across both the return and supply air streams .
Do I use 1 sensor , 10 sensors , 100 sensors .
Do you create turbulence or laminar flow ?
How accurate is the humidity measurement ? The point is , the data is often crap , because measurements are hard to make , time is limited , ca n't afford the right equipment , not enough labour , could not fully simulate the enviroment etc etc .</tokentext>
<sentencetext>Often the data is crap, because the measurements are so hard to make. For example, you would think measuring temperature is easy.
Not so. Let's say you wish to determine the cooling capacity of an air conditioner. How do you measure the temperature and air velocity gradients across both the return and supply air streams?
Do I use 1 sensor, 10 sensors, or 100 sensors?
Do you create turbulence or laminar flow?
How accurate is the humidity measurement? The point is, the data is often crap, because measurements are hard to make, time is limited, can't afford the right equipment, not enough labour, could not fully simulate the environment, etc.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30603048</id>
	<title>Re:You never discard the data</title>
	<author>fotoguzzi</author>
	<datestamp>1259864220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>An accessible link on defending Millikan: <a href="http://eands.caltech.edu/articles/Millikan%20Feature.pdf" title="caltech.edu" rel="nofollow">http://eands.caltech.edu/articles/Millikan%20Feature.pdf</a> [caltech.edu]</htmltext>
<tokentext>An accessible link on defending Millikan : http://eands.caltech.edu/articles/Millikan%20Feature.pdf [ caltech.edu ]</tokentext>
<sentencetext>An accessible link on defending Millikan: http://eands.caltech.edu/articles/Millikan%20Feature.pdf [caltech.edu]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602892</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602108</id>
	<title>Re:Ridiculous</title>
	<author>Anonymous</author>
	<datestamp>1259853240000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><div class="quote"><p>The scientific process is bullet proof. The folks who "do science" not necessarily so.</p></div><p>What exactly are you advocating?</p>
	</htmltext>
<tokentext>The scientific process is bullet proof .
The folks who " do science " not necessarily so . What exactly are you advocating ?</tokentext>
<sentencetext>The scientific process is bullet proof.
The folks who "do science" not necessarily so. What exactly are you advocating?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601648</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602592</id>
	<title>Re:Good!</title>
	<author>Thing 1</author>
	<datestamp>1259857980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>[...] preventing others from going down the same path with the same methodology is still highly valuable!</p></div>
</blockquote><p>Exactly.  Thomas Edison "discovered" over 5,000 ways <b>not</b> to create a light bulb.  Had he published each and every one of them, perhaps the light bulb would have been invented sooner -- perhaps by someone else, or perhaps by him, collaborating with someone else who had read his published accounts of "how not to create a light bulb."</p>
	</htmltext>
<tokentext>[ ... ] preventing others from going down the same path with the same methodology is still highly valuable !
Exactly. Thomas Edison " discovered " over 5,000 ways how not to create a light bulb .
Had he published each and every one of them , perhaps the light bulb would have been invented sooner -- perhaps by someone else , or perhaps by him , collaborating with someone else who had read his published accounts of " how not to create a light bulb .
"</tokentext>
<sentencetext>[...] preventing others from going down the same path with the same methodology is still highly valuable!
Exactly.  Thomas Edison "discovered" over 5,000 ways how not to create a light bulb.
Had he published each and every one of them, perhaps the light bulb would have been invented sooner -- perhaps by someone else, or perhaps by him, collaborating with someone else who had read his published accounts of "how not to create a light bulb.
"
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601662</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602524</id>
	<title>Re:Ridiculous</title>
	<author>Arancaytar</author>
	<datestamp>1259857380000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>It is possible to end up with crap data because the premise of your experiment is wrong. You can ignore a variable that should have been controlled or kept equal, or you can measure the wrong variable.</p><p>You can also end up with data that neither confirms nor denies your hypothesis, because it allows no statistically significant conclusion.</p></htmltext>
<tokentext>It is possible to end up with crap data because the premise of your experiment is wrong .
You can ignore a variable that should have been controlled or kept equal , or you can measure the wrong variable . You can also end up with data that neither confirms nor denies your hypothesis , because it allows no statistically significant conclusion .</tokentext>
<sentencetext>It is possible to end up with crap data because the premise of your experiment is wrong.
You can ignore a variable that should have been controlled or kept equal, or you can measure the wrong variable. You can also end up with data that neither confirms nor denies your hypothesis, because it allows no statistically significant conclusion.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601656</id>
	<title>Throw away data?</title>
	<author>Anonymous</author>
	<datestamp>1259849340000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Proving a theory incorrect is often just as valuable as proving a theory correct.</p></htmltext>
<tokentext>Proving a theory incorrect is often just as valuable as proving a theory correct .</tokentext>
<sentencetext>Proving a theory incorrect is often just as valuable as proving a theory correct.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602892</id>
	<title>Re:You never discard the data</title>
	<author>fotoguzzi</author>
	<datestamp>1259861940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><i>A good example from physics is the Millikan oil-drop experiment, where he threw out all the data that didn't fit what he was trying to prove -- but then claimed in his paper that he'd never thrown out any data.</i> <br>
<br>
Another take on Millikan: <a href="http://www.americanscientist.org/my_amsci/restricted.aspx?act=pdf&amp;id=2706085559588" title="americanscientist.org" rel="nofollow">http://www.americanscientist.org/my_amsci/restricted.aspx?act=pdf&amp;id=2706085559588</a> [americanscientist.org]</htmltext>
<tokentext>A good example from physics is the Millikan oil-drop experiment , where he threw out all the data that did n't fit what he was trying to prove -- but then claimed in his paper that he 'd never thrown out any data .
Another take on Millikan : http://www.americanscientist.org/my_amsci/restricted.aspx?act=pdf&amp;id=2706085559588 [ americanscientist.org ]</tokentext>
<sentencetext>A good example from physics is the Millikan oil-drop experiment, where he threw out all the data that didn't fit what he was trying to prove -- but then claimed in his paper that he'd never thrown out any data.
Another take on Millikan: http://www.americanscientist.org/my_amsci/restricted.aspx?act=pdf&amp;id=2706085559588 [americanscientist.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601872</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602232</id>
	<title>Seconded.</title>
	<author>Estanislao Martínez</author>
	<datestamp>1259854680000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><blockquote><div><blockquote><div><p>If the data don't make sense according to your theory, you don't discard the data, you discard the theory and work out a new one that fits the facts as you've observed them.  TFA says that Dunbar was watching postdocs doing research, and if so, they should have known better. Alas, too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what's actually going on.</p></div></blockquote><p>This is a beautiful explanation of how science is supposed to work. In reality, science doesn't really work this way. It doesn't work this way in my experience as a scientist, and it doesn't work this way if you read the history of science.</p></div></blockquote><p>Indeed.  The sort of thing being discussed in TFA is one of the classic themes of late 20th century philosophy and history of science: the disconnect between traditional philosophy of science and the actual practice of science.

</p><p>Kuhn's <i>Structure of Scientific Revolutions</i> is a good place to start.  Just one tiny example from the book: Kuhn goes on about how, during normal science, scientists perform experiments to confirm the results that they expect to get.  When an experiment contradicts the theory, they don't automatically assume that the theory is wrong; instead, they assume that the experiment was flawed.

</p><p>Feyerabend and many other philosophers of science take a complementary stand to this by stressing the theory-ladenness of "facts."  The claim that the "facts" contradict a hypothesis is never a theory-independent observation, but rather, the conclusion of a <b>different theory</b> that we may overthrow.  Feyerabend's classic example is the Tower Argument that Aristotle used to refute the theory that the Earth moves.  <a href="http://en.wikipedia.org/wiki/Paul_Feyerabend" title="wikipedia.org" rel="nofollow">Wikipedia's article on Paul Feyerabend</a> [wikipedia.org] has a decent, if terse, explanation of this:</p><blockquote><div><p>"The tower argument was one of the main objections against the theory of a moving earth. Aristotelians assumed that the fact that a stone which is dropped from a tower lands directly beneath it shows that the earth is stationary. They thought that, if the earth moved while the stone was falling, the stone would have been "left behind". Objects would fall diagonally instead of vertically. Since this does not happen, Aristotelians thought that it was evident that the earth did not move. If one uses ancient theories of impulse and relative motion, the Copernican theory indeed appears to be falsified by the fact that objects fall vertically on earth."</p></div></blockquote><p>Feyerabend goes on to argue that many of our most successful contemporary scientific theories (e.g., heliocentrism and geodynamicism) became so because their Renaissance and Enlightenment proponents held on to them and continued to elaborate on them <b>despite them being contradicted by "the facts," as judged by the application of theories that were better established at the time</b> (e.g., Aristotelian mechanics).
That is, new scientific theories often succeed because their proponents keep working on them and improving them despite being contradicted by the "facts"; then as the new theories become stronger and better accepted, people start judging the "facts" through the lens of the new instead of the old, and forget the problems that the new theories were judged to have and never resolved (e.g., things like Newtonian physics not having the same explanatory range as Aristotelian physics).</p>
	</htmltext>
<tokentext>If the data do n't make sense according to your theory , you do n't discard the data , you discard the theory and work out a new one that fits the facts as you 've observed them .
TFA says that Dunbar was watching postdocs doing research , and if so , they should have known better .
Alas , too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what 's actually going on . This is a beautiful explanation of how science is supposed to work .
In reality , science does n't really work this way .
It does n't work this way in my experience as a scientist , and it does n't work this way if you read the history of science . Indeed .
The sort of thing being discussed in TFA is one of the classic themes of late 20th century philosophy and history of science : the disconnect between traditional philosophy of science and the actual practice of science .
Kuhn 's Structure of Scientific Revolutions is a good place to start .
Just one tiny example of the book : Kuhn goes on about how during normal science , scientists perform experiments to confirm the results that they expect to get .
When an experiment contradicts the theory , they do n't automatically assume that the theory is wrong ; on the other hand , they assume that the experiment was flawed .
Feyerabend and many other philosophers of science take a complementary stand to this by stressing the theory-ladenness of " facts .
" The claim that the " facts " contradict a hypothesis is never a theory-independent observation , but rather , the conclusion of a different theory that we may overthrow .
Feyerabend 's classic example is the Tower Argument that Aristotle used to refute the theory that the Earth moves .
Wikipedia 's article on Paul Feyerabend [ wikipedia.org ] has a decent , if terse , explanation of this : " The tower argument was one of the main objections against the theory of a moving earth .
Aristotelians assumed that the fact that a stone which is dropped from a tower lands directly beneath it shows that the earth is stationary .
They thought that , if the earth moved while the stone was falling , the stone would have been " left behind " .
Objects would fall diagonally instead of vertically .
Since this does not happen , Aristotelians thought that it was evident that the earth did not move .
If one uses ancient theories of impulse and relative motion , the Copernican theory indeed appears to be falsified by the fact that objects fall vertically on earth .
" Feyerabend goes on to argue that many of our most successful contemporary scientific theories ( e.g. , heliocentrism and geodynamicism ) became so because their Renaissance and Enlightenment proponents held on to them and continued to elaborate on them despite them being contradicted by " the facts , " as judged by the application of theories that were better established at the time ( e.g. , Aristotelian mechanics ) .
That is , new scientific theories often succeed because their proponents keep working on them and improving them despite being contradicted by the " facts " ; then as the new theories become stronger and better accepted , people start judging the " facts " through the lens of the new instead of the old , and forget the problems that the new theories were judged to have and never resolved ( e.g. , things like Newtonian physics not having the same explanatory range as Aristotelian physics ) .</tokentext>
<sentencetext>If the data don't make sense according to your theory, you don't discard the data, you discard the theory and work out a new one that fits the facts as you've observed them.
TFA says that Dunbar was watching postdocs doing research, and if so, they should have known better.
Alas, too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what's actually going on. This is a beautiful explanation of how science is supposed to work.
In reality, science doesn't really work this way.
It doesn't work this way in my experience as a scientist, and it doesn't work this way if you read the history of science. Indeed.
The sort of thing being discussed in TFA is one of the classic themes of late 20th century philosophy and history of science: the disconnect between traditional philosophy of science and the actual practice of science.
Kuhn's Structure of Scientific Revolutions is a good place to start.
Just one tiny example of the book: Kuhn goes on about how during normal science, scientists perform experiments to confirm the results that they expect to get.
When an experiment contradicts the theory, they don't automatically assume that the theory is wrong; on the other hand, they assume that the experiment was flawed.
Feyerabend and many other philosophers of science take a complementary stand to this by stressing the theory-ladenness of "facts.
"  The claim that the "facts" contradict a hypothesis is never a theory-independent observation, but rather, the conclusion of a different theory that we may overthrow.
Feyerabend's classic example is the Tower Argument that Aristotle used to refute the theory that the Earth moves.
Wikipedia's article on Paul Feyerabend [wikipedia.org] has a decent, if terse, explanation of this: "The tower argument was one of the main objections against the theory of a moving earth.
Aristotelians assumed that the fact that a stone which is dropped from a tower lands directly beneath it shows that the earth is stationary.
They thought that, if the earth moved while the stone was falling, the stone would have been "left behind".
Objects would fall diagonally instead of vertically.
Since this does not happen, Aristotelians thought that it was evident that the earth did not move.
If one uses ancient theories of impulse and relative motion, the Copernican theory indeed appears to be falsified by the fact that objects fall vertically on earth.
"Feyerabend goes on to argue that many of our most successful contemporary scientific theories (e.g., heliocentrism and geodynamicism) became so because their Renaissance and Enlightenment proponents held on to them and continued to elaborate on them  despite them being contradicted by "the facts," as judged by the application of  theories that were better established at the time (e.g., Aristotelian mechanics).
That is, new scientific theories often succeed because their proponents keep working on them and improving them despite being contradicted by the "facts"; then as the new theories become stronger and better accepted, people start judging the "facts" through the lens of the new instead of the old, and forget the problems that the new theories were judged to have and never resolved (e.g., things like Newtonian physics not having the same explanatory range as Aristotelian physics).
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601872</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30603556</id>
	<title>Re:The problem... (maybe?)</title>
	<author>Anonymous</author>
	<datestamp>1262290500000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>This is a fallacy of composition.</p><p>Kepler deduced important facts about planetary motion without understanding that gravity was a force or that planets were attracted to the sun.</p><p>Newton understood the interactions of forces and bodies, though he knew nothing of the atoms that made them up, or even the underlying structures of the universe which gave them force.</p><p>Darwin understood the overarching concept  of the evolution of species, though he had no concept of what a gene was, what DNA was, or how cells copied themselves.</p><p>We don't need to understand the "low-end" of the system to understand to some extent its "high end" features. That's a bit too much reductionism.</p></htmltext>
<tokentext>This is a fallacy of composition . Kepler deduced important facts about planetary motion without understanding that gravity was a force or that planets were attracted to the sun . Newton understood the interactions of forces and bodies , though he knew nothing of the atoms that made them up , or even the underlying structures of the universe which gave them force . Darwin understood the overarching concept of the evolution of species , though he had no concept of what a gene was , what DNA was , or how cells copied themselves . We do n't need to understand the " low-end " of the system to understand to some extent its " high end " features .
That 's a bit too much reductionism .</tokentext>
<sentencetext>This is a fallacy of composition. Kepler deduced important facts about planetary motion without understanding that gravity was a force or that planets were attracted to the sun. Newton understood the interactions of forces and bodies, though he knew nothing of the atoms that made them up, or even the underlying structures of the universe which gave them force. Darwin understood the overarching concept of the evolution of species, though he had no concept of what a gene was, what DNA was, or how cells copied themselves. We don't need to understand the "low-end" of the system to understand to some extent its "high end" features.
That's a bit too much reductionism.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602362</id>
	<title>Re:You never discard the data</title>
	<author>Anonymous</author>
	<datestamp>1259856000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Recovering from conflicting data often (though not always) requires discarding bad data, because things go wrong---samples get contaminated, measurement apparatuses get miscalibrated, the planets don't line up just right (yes, I'm kidding---except maybe if it's some kind of astrophysics experiment...), etc. In those situations the conflicting data are simply invalid, being taken from a (necessarily) flawed apparatus/environment, and it's perfectly reasonable to discard them (or more likely, label them "bad", keep them somewhere out of the way, but not publish them), make any fixes you can, and try again.</p><p>The question is how to recognise when your conflicting data are actually real. This is what takes so much time and effort to eventually determine, and while it does take longer, compared to confirmation of existing theories, science inevitably gets there eventually.</p></htmltext>
<tokentext>Recovering from conflicting data often ( though not always ) requires discarding bad data , because things go wrong---samples get contaminated , measurement apparatuses get miscalibrated , the planets do n't line up just right ( yes , I 'm kidding---except maybe if it 's some kind of astrophysics experiment... ) , etc .
In those situations the conflicting data are simply invalid , being taken from a ( necessarily ) flawed apparatus/environment , and it 's perfectly reasonable to discard them ( or more likely , label them " bad " , keep them somewhere out of the way , but not publish them ) , make any fixes you can , and try again . The question is how to recognise when your conflicting data are actually real .
This is what takes so much time and effort to eventually determine , and while it does take longer , compared to confirmation of existing theories , science inevitably gets there eventually .</tokentext>
<sentencetext>Recovering from conflicting data often (though not always) requires discarding bad data, because things go wrong---samples get contaminated, measurement apparatuses get miscalibrated, the planets don't line up just right (yes, I'm kidding---except maybe if it's some kind of astrophysics experiment...), etc.
In those situations the conflicting data are simply invalid, being taken from a (necessarily) flawed apparatus/environment, and it's perfectly reasonable to discard them (or more likely, label them "bad", keep them somewhere out of the way, but not publish them), make any fixes you can, and try again. The question is how to recognise when your conflicting data are actually real.
This is what takes so much time and effort to eventually determine, and while it does take longer, compared to confirmation of existing theories, science inevitably gets there eventually.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601768</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602970</id>
	<title>Re:Ridiculous</title>
	<author>ArcherB</author>
	<datestamp>1259863200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><div class="quote"><p>The scientific process is bullet proof. The folks who "do science" not necessarily so.</p></div><p>What exactly are you advocating?</p></div><p>Maybe he is saying that as long as scientists stand behind science, they'll be fine when the bullets start flying.  When they ignore science and try to do their own thing, they get a cap in their asses.</p>
	</htmltext>
<tokentext>The scientific process is bullet proof .
The folks who " do science " not necessarily so . What exactly are you advocating ? Maybe he is saying that as long as scientists stand behind science , they 'll be fine when the bullets start flying .
When they ignore science and try to do their own thing , they get a cap in their asses .</tokentext>
<sentencetext>The scientific process is bullet proof.
The folks who "do science" not necessarily so. What exactly are you advocating? Maybe he is saying that as long as scientists stand behind science, they'll be fine when the bullets start flying.
When they ignore science and try to do their own thing, they get a cap in their asses.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602108</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601594</id>
	<title>obligatory (insert famous movie here) comment</title>
	<author>Anonymous</author>
	<datestamp>1259848620000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Question *Everything*.</p></htmltext>
<tokentext>Question * Everything * .</tokentext>
<sentencetext>Question *Everything*.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601974</id>
	<title>Re:Ridiculous</title>
	<author>Anonymous</author>
	<datestamp>1259852040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If there's nothing wrong with your equipment or procedures and your experiment fails to find X, then it's indicative that your hypothesis is wrong, and any group that repeats your experiment is going to know about it immediately.</p></htmltext>
<tokentext>If there 's nothing wrong with your equipment or procedures and your experiment fails to find X then it 's indicative that your hypothesis is wrong and any group that repeats your experiment is going to know about it immediately .</tokentext>
<sentencetext>If there's nothing wrong with your equipment or procedures and your experiment fails to find X then it's indicative that your hypothesis is wrong and any group that repeats your experiment is going to know about it immediately.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601648</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601542</id>
	<title>Sometimes screwing up leads to success ...</title>
	<author>Anonymous</author>
	<datestamp>1259848260000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext>The WIRED piece threads what is written in the summary around the story of how <a href="http://en.wikipedia.org/wiki/Discovery\_of\_cosmic\_microwave\_background\_radiation" title="wikipedia.org">Arno Penzias and Robert Wilson at Bell Labs discovered Cosmic Radiation</a> [wikipedia.org] after being puzzled for a year about background noise on their radio telescopes<nobr> <wbr></nobr>... even scraping pigeon poop off their gear as a possible source until they realized the signal was real - <a href="http://www.komar.org/cgi-bin/christmas\_webcam" title="komar.org">Homer Simpson would have said D'OH!<nobr> <wbr></nobr>;-)</a> [komar.org]</htmltext>
<tokenext>The WIRED piece threads what is written in the summary around the story of how Arno Penzias and Robert Wilson at Bell Labs discovered Cosmic Radiation [ wikipedia.org ] after being puzzled for a year about background noise on their radio telescopes ... even scraping pigeon poop off their gear as a possible source until they realized the signal was real - Homer Simpson would have said D'OH !
; - ) [ komar.org ]</tokentext>
<sentencetext>The WIRED piece threads what is written in the summary around the story of how Arno Penzias and Robert Wilson at Bell Labs discovered Cosmic Radiation [wikipedia.org] after being puzzled for a year about background noise on their radio telescopes ... even scraping pigeon poop off their gear as a possible source until they realized the signal was real - Homer Simpson would have said D'OH!
;-) [komar.org]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602228</id>
	<title>Re:You never discard the data</title>
	<author>rkfig</author>
	<datestamp>1259854680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Wow.  Well spoken, good information, and level headed.  If I hadn't noticed the user ID, I would say that you are new here.  Thanks for the input.</htmltext>
<tokenext>Wow .
Well spoken , good information , and level headed .
If I had n't noticed the user ID , I would say that you are new here .
Thanks for the input .</tokentext>
<sentencetext>Wow.
Well spoken, good information, and level headed.
If I hadn't noticed the user ID, I would say that you are new here.
Thanks for the input.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601872</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602250</id>
	<title>Re:Ridiculous</title>
	<author>RobertF</author>
	<datestamp>1259854860000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext>Anyone who has spent time working in a lab knows that not all data is equal.  You can get useless results if something isn't quite as clean as necessary, or perhaps you were in a bit of a rush and didn't connect everything perfectly.  Any interesting experiment usually has numerous points at which humans can mess things up.  Errant data is usually a sign that you have improperly set up the experiment, so you'll spend most of your time reviewing and fixing procedures until you get what you expected.</htmltext>
<tokenext>Anyone who has spent time working in a lab knows that not all data is equal .
You can get useless results if something is n't quite as clean as necessary , or perhaps you were in a bit of a rush and did n't connect everything perfectly .
Any interesting experiment usually has numerous points at which humans can mess things up .
Errant data is usually a sign that you have improperly set up the experiment , so you 'll spend most of your time reviewing and fixing procedures until you get what you expected .</tokentext>
<sentencetext>Anyone who has spent time working in a lab knows that not all data is equal.
You can get useless results if something isn't quite as clean as necessary, or perhaps you were in a bit of a rush and didn't connect everything perfectly.
Any interesting experiment usually has numerous points at which humans can mess things up.
Errant data is usually a sign that you have improperly set up the experiment, so you'll spend most of your time reviewing and fixing procedures until you get what you expected.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602694</id>
	<title>Experimental error</title>
	<author>dbIII</author>
	<datestamp>1259859420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It depends.  If most of your data is noise it's fairly worthless anyway and you are better off trying to limit the sources of error and try again.<br>For example consider seismic data.  You've got 50Hz or thereabouts induced in the cables near powerlines, you have wind blowing on the geophones, passing cars or trains, differences in soil above the rock and other sources of noise.  A lot of seismic data processing seems to be about throwing away the noisy data and stacking up what is left to limit the effect of noise even further.<br>For other things there are different sources of error which may not be obvious.  It's tempting to think it really is 27.23 Celsius because the digital thermometer says so, but the little semiconductor measuring probe may be out a full half a degree or more even if it does spit out numbers that fool people into thinking it is more accurate.  Sure enough ten seconds later it could be telling you it is 26.8 Celsius when nothing has changed.<blockquote><div><p>Alas, too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what's actually going on.</p></div></blockquote><p>If what is actually going on is that a train went past when the readings were taken or if the mains power had a minor spike then nobody really cares.  It can take a while to set up a good experiment or set of measurements and some of the initial information collected may be rubbish.  I've had bits of mid-range steel tested where the results came back with large amounts of tungsten - and instead of compiling some theory about how it got there I've told the lab to kick the new kid off the machine, clean the electrodes and spark test it again.</p>
	</htmltext>
<tokenext>It depends .
If most of your data is noise it 's fairly worthless anyway and you are better off trying to limit the sources of error and try again .
For example consider seismic data .
You 've got 50Hz or thereabouts induced in the cables near powerlines , you have wind blowing on the geophones , passing cars or trains , differences in soil above the rock and other sources of noise .
A lot of seismic data processing seems to be about throwing away the noisy data and stacking up what is left to limit the effect of noise even further .
For other things there are different sources of error which may not be obvious .
It 's tempting to think it really is 27.23 Celsius because the digital thermometer says so , but the little semiconductor measuring probe may be out a full half a degree or more even if it does spit out numbers that fool people into thinking it is more accurate .
Sure enough ten seconds later it could be telling you it is 26.8 Celsius when nothing has changed .
Alas , too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what 's actually going on .
If what is actually going on is that a train went past when the readings were taken or if the mains power had a minor spike then nobody really cares .
It can take a while to set up a good experiment or set of measurements and some of the initial information collected may be rubbish .
I 've had bits of mid-range steel tested where the results came back with large amounts of tungsten - and instead of compiling some theory about how it got there I 've told the lab to kick the new kid off the machine , clean the electrodes and spark test it again .</tokentext>
<sentencetext>It depends.
If most of your data is noise it's fairly worthless anyway and you are better off trying to limit the sources of error and try again.
For example consider seismic data.
You've got 50Hz or thereabouts induced in the cables near powerlines, you have wind blowing on the geophones, passing cars or trains, differences in soil above the rock and other sources of noise.
A lot of seismic data processing seems to be about throwing away the noisy data and stacking up what is left to limit the effect of noise even further.
For other things there are different sources of error which may not be obvious.
It's tempting to think it really is 27.23 Celsius because the digital thermometer says so, but the little semiconductor measuring probe may be out a full half a degree or more even if it does spit out numbers that fool people into thinking it is more accurate.
Sure enough ten seconds later it could be telling you it is 26.8 Celsius when nothing has changed.
Alas, too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what's actually going on.
If what is actually going on is that a train went past when the readings were taken or if the mains power had a minor spike then nobody really cares.
It can take a while to set up a good experiment or set of measurements and some of the initial information collected may be rubbish.
I've had bits of mid-range steel tested where the results came back with large amounts of tungsten - and instead of compiling some theory about how it got there I've told the lab to kick the new kid off the machine, clean the electrodes and spark test it again.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601852</id>
	<title>It's called a "hard pivot" by politicians</title>
	<author>Anonymous</author>
	<datestamp>1259850720000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>When you come up with a stinker of a health care plan that causes <a href="http://www.cbsnews.com/blogs/2009/12/29/politics/politicalhotsheet/entry6035608.shtml" title="cbsnews.com" rel="nofollow">your 60th vote in the Senate to wind up 31 points behind the opposition</a> [cbsnews.com], you call your fuck-up a <a href="http://www.politico.com/news/stories/1209/30925.html" title="politico.com" rel="nofollow">hard pivot</a> [politico.com].</p></htmltext>
<tokenext>When you come up with a stinker of a health care plan that causes your 60th vote in the Senate to wind up 31 points behind the opposition [ cbsnews.com ] , you call your fuck-up a hard pivot [ politico.com ] .</tokentext>
<sentencetext>When you come up with a stinker of a health care plan that causes your 60th vote in the Senate to wind up 31 points behind the opposition [cbsnews.com], you call your fuck-up a hard pivot [politico.com].</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602438</id>
	<title>Re:Ridiculous</title>
	<author>Anonymous</author>
	<datestamp>1259856600000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>It makes no difference unless you can get EVERYONE ELSE to agree to throw out each of their 99 times as well.</p><p>Until then, you're just a Crackpot.</p></htmltext>
<tokenext>It makes no difference unless you can get EVERYONE ELSE to agree to throw out each of their 99 times as well .
Until then , you 're just a Crackpot .</tokentext>
<sentencetext>It makes no difference unless you can get EVERYONE ELSE to agree to throw out each of their 99 times as well.
Until then, you're just a Crackpot.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601648</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566</id>
	<title>Ridiculous</title>
	<author>MrMista\_B</author>
	<datestamp>1259848440000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext><p>"It wasn't uncommon for someone to spend a month on a project and then just discard all their data because the data didn't make sense."</p><p>That doesn't mean the data is wrong, it means the<nobr> <wbr></nobr>/hypothesis/ was wrong, if not the theory, and needs to be modified.</p><p>If they're really throwing out data just because it 'doesn't make sense', they're doing religion, not science.</p></htmltext>
<tokenext>" It was n't uncommon for someone to spend a month on a project and then just discard all their data because the data did n't make sense .
" That does n't mean the data is wrong , it means the /hypothesis/ was wrong , if not the theory , and needs to be modified .
If they 're really throwing out data just because it 'does n't make sense ' , they 're doing religion , not science .</tokentext>
<sentencetext>"It wasn't uncommon for someone to spend a month on a project and then just discard all their data because the data didn't make sense.
"That doesn't mean the data is wrong, it means the /hypothesis/ was wrong, if not the theory, and needs to be modified.
If they're really throwing out data just because it 'doesn't make sense', they're doing religion, not science.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30604956</id>
	<title>Re:You never discard the data</title>
	<author>nine-times</author>
	<datestamp>1262273460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Have you ever done any kind of experiment?  I remember back in college doing a bunch of relatively simple stuff, like testing the rate of acceleration due to gravity.  Half the class got bad data.
</p><p>Sure, they were just students, but these were really simple experiments, designed with full knowledge of what the results should be, and designed specifically to illustrate scientific principles to students.  People were careful, the equipment was adequate, and still they got bad data.  Imagine how much harder it would be if you don't really know what your results should be, and you're making up the experiment without even being sure that your experiment will be capable of testing what you want it to test.
</p><p>I bet people get crap data all the time.  If you get data that's all over the place, no apparent pattern, then there isn't necessarily a theory to work out.  It probably means that your experiment is bad or you performed it badly.  Even if the experiment was good, it means there's probably not a strong connection between the variables you're testing and your results, or else there are other variables you're not controlling for.  It's kind of back to the drawing board at that point.
</p><p>Now if the results weren't random, but were consistently showing a trend contrary to the established theory, then scientists shouldn't toss that data.  I doubt that's what we're talking about here, though.</p></htmltext>
<tokenext>Have you ever done any kind of experiment ?
I remember back in college doing a bunch of relatively simple stuff , like testing the rate of acceleration due to gravity .
Half the class got bad data .
Sure , they were just students , but these were really simple experiments , designed with full knowledge of what the results should be , and designed specifically to illustrate scientific principles to students .
People were careful , the equipment was adequate , and still they got bad data .
Imagine how much harder it would be if you do n't really know what your results should be , and you 're making up the experiment without even being sure that your experiment will be capable of testing what you want it to test .
I bet people get crap data all the time .
If you get data that 's all over the place , no apparent pattern , then there is n't necessarily a theory to work out .
It probably means that your experiment is bad or you performed it badly .
Even if the experiment was good , it means there 's probably not a strong connection between the variables you 're testing and your results , or else there are other variables you 're not controlling for .
It 's kind of back to the drawing board at that point .
Now if the results were n't random , but were consistently showing a trend contrary to the established theory , then scientists should n't toss that data .
I doubt that 's what we 're talking about here , though .</tokentext>
<sentencetext>Have you ever done any kind of experiment?
I remember back in college doing a bunch of relatively simple stuff, like testing the rate of acceleration due to gravity.
Half the class got bad data.
Sure, they were just students, but these were really simple experiments, designed with full knowledge of what the results should be, and designed specifically to illustrate scientific principles to students.
People were careful, the equipment was adequate, and still they got bad data.
Imagine how much harder it would be if you don't really know what your results should be, and you're making up the experiment without even being sure that your experiment will be capable of testing what you want it to test.
I bet people get crap data all the time.
If you get data that's all over the place, no apparent pattern, then there isn't necessarily a theory to work out.
It probably means that your experiment is bad or you performed it badly.
Even if the experiment was good, it means there's probably not a strong connection between the variables you're testing and your results, or else there are other variables you're not controlling for.
It's kind of back to the drawing board at that point.
Now if the results weren't random, but were consistently showing a trend contrary to the established theory, then scientists shouldn't toss that data.
I doubt that's what we're talking about here, though.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602568</id>
	<title>Re:Ridiculous</title>
	<author>Anonymous</author>
	<datestamp>1259857740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I'm a scientist. Throwing out data is not good practice. Instead, some view 'weird' data as a great opportunity for further theorizing, which is probably how it should be considered.</p><p>Nonetheless, under some conditions I could understand throwing out data. When there's too much of it, you need to focus analysis on that of which you can make sense. If nothing makes sense, then there's nothing to contradict or confirm.</p></htmltext>
<tokenext>I 'm a scientist .
Throwing out data is not good practice .
Instead , some view 'weird ' data as a great opportunity for further theorizing , which is probably how it should be considered .
Nonetheless , under some conditions I could understand throwing out data .
When there 's too much of it , you need to focus analysis on that of which you can make sense .
If nothing makes sense , then there 's nothing to contradict or confirm .</tokentext>
<sentencetext>I'm a scientist.
Throwing out data is not good practice.
Instead, some view 'weird' data as a great opportunity for further theorizing, which is probably how it should be considered.
Nonetheless, under some conditions I could understand throwing out data.
When there's too much of it, you need to focus analysis on that of which you can make sense.
If nothing makes sense, then there's nothing to contradict or confirm.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601902</id>
	<title>Better than discarding data</title>
	<author>Anonymous</author>
	<datestamp>1259851140000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"It wasn't uncommon for someone to spend a month on a project and then just discard all their data because the data didn't make sense."</p><p>No need to discard perfectly good data when all you need to do is adjust it a little. Don't they know about <a href="http://wattsupwiththat.com/2009/11/20/mikes-nature-trick/" title="wattsupwiththat.com" rel="nofollow">Mike&rsquo;s Nature trick</a> [wattsupwiththat.com]?</p></htmltext>
<tokenext>" It was n't uncommon for someone to spend a month on a project and then just discard all their data because the data did n't make sense .
" No need to discard perfectly good data when all you need to do is adjust it a little .
Do n't they know about Mike 's Nature trick [ wattsupwiththat.com ] ?</tokentext>
<sentencetext>"It wasn't uncommon for someone to spend a month on a project and then just discard all their data because the data didn't make sense.
"No need to discard perfectly good data when all you need to do is adjust it a little.
Don't they know about Mike’s Nature trick [wattsupwiththat.com]?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601784</id>
	<title>Or you can edit your data....</title>
	<author>sl149q</author>
	<datestamp>1259850360000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Is it just me or does this sound like an explanation for some of the Climategate science... But in that case they just massaged or ignored data that didn't agree with their conceptual framework of CO2 causing global warming.</p><p>Not that the skeptics are all that immune. They seem to cherry pick data almost as well (just not quite as successfully from the POV of selling their story to the media and political left<nobr> <wbr></nobr>..)</p></htmltext>
<tokenext>Is it just me or does this sound like an explanation for some of the Climategate science... But in that case they just massaged or ignored data that did n't agree with their conceptual framework of CO2 causing global warming .
Not that the skeptics are all that immune .
They seem to cherry pick data almost as well ( just not quite as successfully from the POV of selling their story to the media and political left .. )</tokentext>
<sentencetext>Is it just me or does this sound like an explanation for some of the Climategate science... But in that case they just massaged or ignored data that didn't agree with their conceptual framework of CO2 causing global warming.
Not that the skeptics are all that immune.
They seem to cherry pick data almost as well (just not quite as successfully from the POV of selling their story to the media and political left ..)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30605088</id>
	<title>Re:The problem... (maybe?)</title>
	<author>nine-times</author>
	<datestamp>1262274180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>What you say sounds pretty smart to me.  If we don't understand the simplest of nervous systems, then how could we understand the most complex?  On the other hand, it does seem worthwhile to come up with elaborate theories of complex systems.
</p><p>For one thing, we could analyze a simple system and come to completely incorrect ideas if we don't keep the more complex system in mind.  Any theory of simple brains and nervous systems has to offer some room for explanation as to how they can get as complex as our own brains.  To take it to the extreme, there's probably a limit to the knowledge that can be gained from analyzing a neuron on its own.
</p><p>Or thinking of another subject, it's unlikely that Newton would have come up with universal gravitation by looking at the sun's apparent motion.  He had to look at the whole universe-- the motion of all the stars and planets as well as looking at the motion of objects here on earth-- and then say, "These motions must all be caused by the same force."  Sometimes it's worth looking at the whole complex system.
</p><p>Besides that, the fact is that we probably care most about our own brains.  In a lot of ways, we're studying these simpler brains in order to increase our understanding of our own brains.  We want to be able to fix and control some of the problems in our own minds, and we want to replicate our minds.  Insofar as that's the goal, it makes sense to constantly be looking at what we understand and what we can accomplish now instead of waiting for complete and perfect knowledge sometime in the future.</p></htmltext>
<tokenext>What you say sounds pretty smart to me .
If we do n't understand the simplest of nervous systems , then how could we understand the most complex ?
On the other hand , it does seem worthwhile to come up with elaborate theories of complex systems .
For one thing , we could analyze a simple system and come to completely incorrect ideas if we do n't keep the more complex system in mind .
Any theory of simple brains and nervous systems has to offer some room for explanation as to how they can get as complex as our own brains .
To take it to the extreme , there 's probably a limit to the knowledge that can be gained from analyzing a neuron on its own .
Or thinking of another subject , it 's unlikely that Newton would have come up with universal gravitation by looking at the sun 's apparent motion .
He had to look at the whole universe-- the motion of all the stars and planets as well as looking at the motion of objects here on earth-- and then say , " These motions must all be caused by the same force .
" Sometimes it 's worth looking at the whole complex system .
Besides that , the fact is that we probably care most about our own brains .
In a lot of ways , we 're studying these simpler brains in order to increase our understanding of our own brains .
We want to be able to fix and control some of the problems in our own minds , and we want to replicate our minds .
Insofar as that 's the goal , it makes sense to constantly be looking at what we understand and what we can accomplish now instead of waiting for complete and perfect knowledge sometime in the future .</tokentext>
<sentencetext>What you say sounds pretty smart to me.
If we don't understand the simplest of nervous systems, then how could we understand the most complex?
On the other hand, it does seem worthwhile to come up with elaborate theories of complex systems.
For one thing, we could analyze a simple system and come to completely incorrect ideas if we don't keep the more complex system in mind.
Any theory of simple brains and nervous systems has to offer some room for explanation as to how they can get as complex as our own brains.
To take it to the extreme, there's probably a limit to the knowledge that can be gained from analyzing a neuron on its own.
Or thinking of another subject, it's unlikely that Newton would have come up with universal gravitation by looking at the sun's apparent motion.
He had to look at the whole universe-- the motion of all the stars and planets as well as looking at the motion of objects here on earth-- and then say, "These motions must all be caused by the same force.
"  Sometimes it's worth looking at the whole complex system.
Besides that, the fact is that we probably care most about our own brains.
In a lot of ways, we're studying these simpler brains in order to increase our understanding of our own brains.
We want to be able to fix and control some of the problems in our own minds, and we want to replicate our minds.
Insofar as that's the goal, it makes sense to constantly be looking at what we understand and what we can accomplish now instead of waiting for complete and perfect knowledge sometime in the future.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602002</id>
	<title>Re:Or you can edit your data....</title>
	<author>Anonymous</author>
	<datestamp>1259852280000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>By "almost as well" I assume you mean "all the time". The "sceptic" arguments are nothing but a parade of cherry picking with little attempt at genuine investigation.</p><p>And there's no real evidence of the proper scientists massaging or ignoring anything. Just because a detailed, written account of everything doesn't exist <i>in stolen, incomplete private documents</i> doesn't mean it doesn't exist at all.</p></htmltext>
<tokenext>By " almost as well " I assume you mean " all the time " .
The " sceptic " arguments are nothing but a parade of cherry picking with little attempt at genuine investigation .
And there 's no real evidence of the proper scientists massaging or ignoring anything .
Just because a detailed , written account of everything does n't exist in stolen , incomplete private documents does n't mean it does n't exist at all .</tokentext>
<sentencetext>By "almost as well" I assume you mean "all the time".
The "sceptic" arguments are nothing but a parade of cherry picking with little attempt at genuine investigation.
And there's no real evidence of the proper scientists massaging or ignoring anything.
Just because a detailed, written account of everything doesn't exist in stolen, incomplete private documents doesn't mean it doesn't exist at all.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601784</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601916</id>
	<title>Re:Ridiculous</title>
	<author>stms</author>
	<datestamp>1259851320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Scientists are bullet proof, I have a 90\% certainty of that.</htmltext>
<tokenext>Scientists are bullet proof , I have a 90 \ % certainty of that .</tokentext>
<sentencetext>Scientists are bullet proof, I have a 90\% certainty of that.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601648</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601530</id>
	<title>ObClimate reference</title>
	<author>Anonymous</author>
	<datestamp>1259848140000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"The science is settled!"<nobr> <wbr></nobr>:P</p></htmltext>
<tokenext>" The science is settled !
" : P</tokentext>
<sentencetext>"The science is settled!
" :P</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602016</id>
	<title>Re:Ridiculous</title>
	<author>nedlohs</author>
	<datestamp>1259852400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It can also be that their methodology is wrong, their equipment is wrong, or they are simply incompetent.</p><p>I suspect that the main thing that differentiates scientists who make a major lasting contribution to science and those that just push knowledge slowly forward is whether they treat such problems as "Oh... that is interesting" or "Crap, must have screwed it up, better start it over".</p><p>Of course I'm sure those in the first category spend a lot of their time chasing ghosts because most of the time they did in fact screw something up...</p></htmltext>
<tokenext>It can also be that their methodology is wrong , their equipment is wrong , or they are simply incompetent . I suspect that the main thing that differentiates scientists who make a major lasting contribution to science and those that just push knowledge slowly forward is whether they treat such problems as " Oh... that is interesting " or " Crap , must have screwed it up , better start it over " . Of course I 'm sure those in the first category spend a lot of their time chasing ghosts because most of the time they did in fact screw something up ...</tokenext>
<sentencetext>It can also be that their methodology is wrong, their equipment is wrong, or they are simply incompetent. I suspect that the main thing that differentiates scientists who make a major lasting contribution to science and those that just push knowledge slowly forward is whether they treat such problems as "Oh... that is interesting" or "Crap, must have screwed it up, better start it over". Of course I'm sure those in the first category spend a lot of their time chasing ghosts because most of the time they did in fact screw something up...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601850</id>
	<title>Re:You never discard the data</title>
	<author>caramelcarrot</author>
	<datestamp>1259850720000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>As other people have pointed out - sometimes the data is just crap due to the difficulty of making measurements. Sometimes you've measured something other than what you actually need to compare to theory, sometimes there's too much noise.

The skill of a great experimentalist is being able to take good enough data that you can't justify ignoring it if it comes out different to what you expected.</htmltext>
<tokenext>As other people have pointed out - sometimes the data is just crap due to the difficulty of making measurements .
Sometimes you 've measured something other than what you actually need to compare to theory , sometimes there 's too much noise .
The skill of a great experimentalist is being able to take good enough data that you ca n't justify ignoring it if it comes out different to what you expected .</tokentext>
<sentencetext>As other people have pointed out - sometimes the data is just crap due to the difficulty of making measurements.
Sometimes you've measured something other than what you actually need to compare to theory, sometimes there's too much noise.
The skill of a great experimentalist is being able to take good enough data that you can't justify ignoring it if it comes out different to what you expected.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602156</id>
	<title>Re:You never discard the data</title>
	<author>syousef</author>
	<datestamp>1259853900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>If the data don't make sense according to your theory, you don't discard the data, you discard the theory and work out a new one that fits the facts as you've observed them.</i></p><p>If it's a well established theory, you want to eliminate sources of error before trying to overthrow it. For every Einstein that moves us to the next level from a well established theory there are 3 million cranks that just can't set up a well controlled experiment to save themselves. If you've conducted the experiment sufficiently badly the chances of working out what happened are nil. What you do is repeat the experiment correcting the errors and see what you get.</p><p><i>Alas, too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what's actually going on.</i></p><p>It's a human frailty. Einstein wasted the latter half of his career because he believed "God does not play dice", rather than accept quantum theory. What a pity superstition had to come into it. A decade after his death, Bell's theorem proved him wrong.</p></htmltext>
<tokenext>If the data do n't make sense according to your theory , you do n't discard the data , you discard the theory and work out a new one that fits the facts as you 've observed them . If it 's a well established theory , you want to eliminate sources of error before trying to overthrow it .
For every Einstein that moves us to the next level from a well established theory there are 3 million cranks that just ca n't set up a well controlled experiment to save themselves .
If you 've conducted the experiment sufficiently badly the chances of working out what happened are nil .
What you do is repeat the experiment correcting the errors and see what you get . Alas , too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what 's actually going on . It 's a human frailty .
Einstein wasted the latter half of his career because he believed " God does not play dice " , rather than accept quantum theory .
What a pity superstition had to come into it .
A decade after his death , Bell 's theorem proved him wrong .</tokentext>
<sentencetext>If the data don't make sense according to your theory, you don't discard the data, you discard the theory and work out a new one that fits the facts as you've observed them. If it's a well established theory, you want to eliminate sources of error before trying to overthrow it.
For every Einstein that moves us to the next level from a well established theory there are 3 million cranks that just can't set up a well controlled experiment to save themselves.
If you've conducted the experiment sufficiently badly the chances of working out what happened are nil.
What you do is repeat the experiment correcting the errors and see what you get. Alas, too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what's actually going on. It's a human frailty.
Einstein wasted the latter half of his career because he believed "God does not play dice", rather than accept quantum theory.
What a pity superstition had to come into it.
A decade after his death, Bell's theorem proved him wrong.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30603274</id>
	<title>Re:The problem... (maybe?)</title>
	<author>phantomfive</author>
	<datestamp>1259867520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Why not look at data wherever we can find it? If they find certain patterns that the human brain tends to go through, why not observe them and record them and understand them as well as possible?<br> <br>
It isn't always necessary to know the underlying, simpler systems before we get useful information. I can calculate the resultant change in velocity of two objects after a collision, even though I don't understand the full quantum-mechanical underpinnings of the collision.  I know what a computer will do when I call the printf() function, even though I'm not sure of all the math behind transistor construction.  Take data wherever you can find it.</htmltext>
<tokenext>Why not look at data wherever we can find it ?
If they find certain patterns that the human brain tends to go through , why not observe them and record them and understand them as well as possible ?
It is n't always necessary to know the underlying , simpler systems before we get useful information .
I can calculate the resultant change in velocity of two objects after a collision , even though I do n't understand the full quantum-mechanical underpinnings of the collision .
I know what a computer will do when I call the printf ( ) function , even though I 'm not sure of all the math behind transistor construction .
Take data wherever you can find it .</tokentext>
<sentencetext>Why not look at data wherever we can find it?
If they find certain patterns that the human brain tends to go through, why not observe them and record them and understand them as well as possible?
It isn't always necessary to know the underlying, simpler systems before we get useful information.
I can calculate the resultant change in velocity of two objects after a collision, even though I don't understand the full quantum-mechanical underpinnings of the collision.
I know what a computer will do when I call the printf() function, even though I'm not sure of all the math behind transistor construction.
Take data wherever you can find it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601988</parent>
</comment>
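The collision calculation mentioned in the comment above is a concrete case of using a high-level model without knowing its microscopic underpinnings. A minimal sketch (not part of any comment; the function name is my own) of a one-dimensional elastic collision, derived purely from conservation of momentum and kinetic energy:

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities of two point masses after a 1-D elastic collision.

    Follows from conservation of momentum (m1*v1 + m2*v2) and of kinetic
    energy (m1*v1**2/2 + m2*v2**2/2); no quantum-mechanical detail of the
    collision itself is needed.
    """
    v1_after = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_after = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_after, v2_after

# Equal masses simply exchange velocities:
print(elastic_collision_1d(1.0, 2.0, 1.0, 0.0))  # (0.0, 2.0)
```

A quick sanity check is to verify that total momentum and kinetic energy are the same before and after the call, which the formulas guarantee by construction.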
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601872</id>
	<title>Re:You never discard the data</title>
	<author>bcrowell</author>
	<datestamp>1259850840000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><blockquote><div><p>If the data don't make sense according to your theory, you don't discard the data, you discard the theory and work out a new one that fits the facts as you've observed them. TFA says that Dunbar was watching postdocs doing research, and if so, they should have known better. Alas, too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what's actually going on.</p></div>
</blockquote><p>
This is a beautiful explanation of how science is supposed to work. In reality, science doesn't really work this way. It doesn't work this way in my experience as a scientist, and it doesn't work this way if you read the history of science.
</p><p>For some good historical examples, see Microbe Hunters, by de Kruif (one of the best science books of all time, although you have to look past the racism in some places -- de Kruif was born in 1890). A good example from physics is the Millikan oil-drop experiment, where he threw out all the data that didn't fit what he was trying to prove -- but then claimed in his paper that he'd never thrown out any data. Galileo described lots of experiments as if he'd done them, even though he didn't actually do them, or they wouldn't have actually come out the way he described.
</p><p>
Michelson and Morley set out to prove the existence of the aether, and published their results believing they must be wrong. Nobody else believed them, either. Various people then spent the next 30 years trying to fix the experiment by doing things like taking the apparatus up to the top of a mountain, or doing the experiment in a tent, so that the aether wouldn't be pulled along with the earth or the walls of a building. By the time Einstein published special relativity in 1905, most physicists had either never heard of the MM experiment, or considered it inconclusive.
</p><p>
When your results come out goofy, 99.9\% of the time it's because you screwed up. You don't publish it, you go back and fix it. If every scientist published every result he didn't believe himself, the results would be disastrous. If you try over and over again to fix it, and you still fail, only then do you have to make a complicated judgment about whether to publish it or not.
</p><p>
The way science really works is not that scientists are disinterested. Scientists generally have extremely strong opinions that they set out to prove are true using experiments. The motivation is often that scientist A dislikes scientist B and wants to prove him wrong, or something similarly irrational, personal, or emotional. The reason this doesn't cause the downfall of science as an enterprise is that there are checks and balances built in. If A and B are enemies (and if you think the word "enemies" is too strong, you haven't spent much time around academics), and A publishes something, B may decide just to see if he can screw that sonofabitch A over by reproducing his work and finding something wrong with it. It's just like the adversarial system of justice. Society doesn't fall apart just because there are lawyers willing to represent nasty criminals. Einstein was famously asked what he would do if a certain experiment didn't come out consistent with relativity; his reply was that then the experiment would be wrong. Einstein fought against Bohr's quantum mechanics for decades. Bohr fought against Einstein's photons for decades. They were bitter rivals (and also good friends). It didn't matter that they were intensely prejudiced, and wrong 50\% of the time; in the end, things sorted themselves out.
</p>
	</htmltext>
<tokenext>If the data do n't make sense according to your theory , you do n't discard the data , you discard the theory and work out a new one that fits the facts as you 've observed them .
TFA says that Dunbar was watching postdocs doing research , and if so , they should have known better .
Alas , too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what 's actually going on .
This is a beautiful explanation of how science is supposed to work .
In reality , science does n't really work this way .
It does n't work this way in my experience as a scientist , and it does n't work this way if you read the history of science .
For some good historical examples , see Microbe Hunters , by de Kruif ( one of the best science books of all time , although you have to look past the racism in some places -- de Kruif was born in 1890 ) .
A good example from physics is the Millikan oil-drop experiment , where he threw out all the data that did n't fit what he was trying to prove -- but then claimed in his paper that he 'd never thrown out any data .
Galileo described lots of experiments as if he 'd done them , even though he did n't actually do them , or they would n't have actually come out the way he described .
Michelson and Morley set out to prove the existence of the aether , and published their results believing they must be wrong .
Nobody else believed them , either .
Various people then spent the next 30 years trying to fix the experiment by doing things like taking the apparatus up to the top of a mountain , or doing the experiment in a tent , so that the aether would n't be pulled along with the earth or the walls of a building .
By the time Einstein published special relativity in 1905 , most physicists had either never heard of the MM experiment , or considered it inconclusive .
When your results come out goofy , 99.9 \ % of the time it 's because you screwed up .
You do n't publish it , you go back and fix it .
If every scientist published every result he did n't believe himself , the results would be disastrous .
If you try over and over again to fix it , and you still fail , only then do you have to make a complicated judgment about whether to publish it or not .
The way science really works is not that scientists are disinterested .
Scientists generally have extremely strong opinions that they set out to prove are true using experiments .
The motivation is often that scientist A dislikes scientist B and wants to prove him wrong , or something similarly irrational , personal , or emotional .
The reason this does n't cause the downfall of science as an enterprise is that there are checks and balances built in .
If A and B are enemies ( and if you think the word " enemies " is too strong , you have n't spent much time around academics ) , and A publishes something , B may decide just to see if he can screw that sonofabitch A over by reproducing his work and finding something wrong with it .
It 's just like the adversarial system of justice .
Society does n't fall apart just because there are lawyers willing to represent nasty criminals .
Einstein was famously asked what he would do if a certain experiment did n't come out consistent with relativity ; his reply was that then the experiment would be wrong .
Einstein fought against Bohr 's quantum mechanics for decades .
Bohr fought against Einstein 's photons for decades .
They were bitter rivals ( and also good friends ) .
It did n't matter that they were intensely prejudiced , and wrong 50 \ % of the time ; in the end , things sorted themselves out .</tokentext>
<sentencetext>If the data don't make sense according to your theory, you don't discard the data, you discard the theory and work out a new one that fits the facts as you've observed them.
TFA says that Dunbar was watching postdocs doing research, and if so, they should have known better.
Alas, too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what's actually going on.
This is a beautiful explanation of how science is supposed to work.
In reality, science doesn't really work this way.
It doesn't work this way in my experience as a scientist, and it doesn't work this way if you read the history of science.
For some good historical examples, see Microbe Hunters, by de Kruif (one of the best science books of all time, although you have to look past the racism in some places -- de Kruif was born in 1890).
A good example from physics is the Millikan oil-drop experiment, where he threw out all the data that didn't fit what he was trying to prove -- but then claimed in his paper that he'd never thrown out any data.
Galileo described lots of experiments as if he'd done them, even though he didn't actually do them, or they wouldn't have actually come out the way he described.
Michelson and Morley set out to prove the existence of the aether, and published their results believing they must be wrong.
Nobody else believed them, either.
Various people then spent the next 30 years trying to fix the experiment by doing things like taking the apparatus up to the top of a mountain, or doing the experiment in a tent, so that the aether wouldn't be pulled along with the earth or the walls of a building.
By the time Einstein published special relativity in 1905, most physicists had either never heard of the MM experiment, or considered it inconclusive.
When your results come out goofy, 99.9\% of the time it's because you screwed up.
You don't publish it, you go back and fix it.
If every scientist published every result he didn't believe himself, the results would be disastrous.
If you try over and over again to fix it, and you still fail, only then do you have to make a complicated judgment about whether to publish it or not.
The way science really works is not that scientists are disinterested.
Scientists generally have extremely strong opinions that they set out to prove are true using experiments.
The motivation is often that scientist A dislikes scientist B and wants to prove him wrong, or something similarly irrational, personal, or emotional.
The reason this doesn't cause the downfall of science as an enterprise is that there are checks and balances built in.
If A and B are enemies (and if you think the word "enemies" is too strong, you haven't spent much time around academics), and A publishes something, B may decide just to see if he can screw that sonofabitch A over by reproducing his work and finding something wrong with it.
It's just like the adversarial system of justice.
Society doesn't fall apart just because there are lawyers willing to represent nasty criminals.
Einstein was famously asked what he would do if a certain experiment didn't come out consistent with relativity; his reply was that then the experiment would be wrong.
Einstein fought against Bohr's quantum mechanics for decades.
Bohr fought against Einstein's photons for decades.
They were bitter rivals (and also good friends).
It didn't matter that they were intensely prejudiced, and wrong 50\% of the time; in the end, things sorted themselves out.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30605532</id>
	<title>Re:The problem... (maybe?)</title>
	<author>Khelder</author>
	<datestamp>1262276940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>So would you also advocate giving up on, say, weather prediction? Until we can come up with a model of the weather based on experimentally-tested theories such as how individual molecules of air move and interact?</p><p>It would be really helpful to me professionally if we had models of people that were more like our models of bridges, car engines, airplanes, integrated circuits, etc. But modelling humans explicitly as systems of millions of individual units (e.g., cells, neurons) is going to be a long time coming. Current models of human perception, cognition, and so on, are far from perfect, but they're still a lot better than nothing.</p></htmltext>
<tokenext>So would you also advocate giving up on , say , weather prediction ?
Until we can come up with a model of the weather based on experimentally-tested theories such as how individual molecules of air move and interact ? It would be really helpful to me professionally if we had models of people that were more like our models of bridges , car engines , airplanes , integrated circuits , etc .
But modelling humans explicitly as systems of millions of individual units ( e.g. , cells , neurons ) is going to be a long time coming .
Current models of human perception , cognition , and so on , are far from perfect , but they 're still a lot better than nothing .</tokentext>
<sentencetext>So would you also advocate giving up on, say, weather prediction?
Until we can come up with a model of the weather based on experimentally-tested theories such as how individual molecules of air move and interact? It would be really helpful to me professionally if we had models of people that were more like our models of bridges, car engines, airplanes, integrated circuits, etc.
But modelling humans explicitly as systems of millions of individual units (e.g., cells, neurons) is going to be a long time coming.
Current models of human perception, cognition, and so on, are far from perfect, but they're still a lot better than nothing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602850</id>
	<title>Or more likely....</title>
	<author>Anonymous</author>
	<datestamp>1259861400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>More likely B ends up on the journal peer review panel because he is a respected pillar of his field, and causes pesky upstart A's paper to be rejected for publication.  Forcing the field to wait 40 years for B and his ilk to shuffle off before followers of "crackpot" A can finally get their corroborating data published.</htmltext>
<tokenext>More likely B ends up on the journal peer review panel because he is a respected pillar of his field , and causes pesky upstart A 's paper to be rejected for publication .
Forcing the field to wait 40 years for B and his ilk to shuffle off before followers of " crackpot " A can finally get their corroborating data published .</tokentext>
<sentencetext>More likely B ends up on the journal peer review panel because he is a respected pillar of his field, and causes pesky upstart A's paper to be rejected for publication.
Forcing the field to wait 40 years for B and his ilk to shuffle off before followers of "crackpot" A can finally get their corroborating data published.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601872</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30604472</id>
	<title>Re:You never discard the data</title>
	<author>Mutatis Mutandis</author>
	<datestamp>1262269140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yes, but that pre-supposes the ability to invent a new theory, because scientists are very unwilling to discard a theory if they have no alternative. After all, having no theoretical framework at all is very uncomfortable.</p><p>And there I do think that Dunbar makes a perfectly valid point: Any group of specialists who are all of the same mind is very bad at thinking "out of the box" and inventing a new theory. To be able to do that, you need a healthy mixture of different backgrounds, and enough dissent to stimulate the debate. Unfortunately, scientists often assemble in excessively homogeneous groups, sometimes on their own initiative ("old boys network") and sometimes through deliberate but foolish policy ("center of excellence").</p><p>This is part of the reason why publishing and attending scientific congresses is a vital part of scientific activity. I've often noticed that in industry, this is regarded as a kind of bonus, a perk that allows scientists to travel to nice places. But it is absolutely essential, even if that means talking things through with direct competitors.</p></htmltext>
<tokenext>Yes , but that pre-supposes the ability to invent a new theory , because scientists are very unwilling to discard a theory if they have no alternative .
After all , having no theoretical framework at all is very uncomfortable . And there I do think that Dunbar makes a perfectly valid point : Any group of specialists who are all of the same mind is very bad at thinking " out of the box " and inventing a new theory .
To be able to do that , you need a healthy mixture of different backgrounds , and enough dissent to stimulate the debate .
Unfortunately , scientists often assemble in excessively homogeneous groups , sometimes on their own initiative ( " old boys network " ) and sometimes through deliberate but foolish policy ( " center of excellence " ) . This is part of the reason why publishing and attending scientific congresses is a vital part of scientific activity .
I 've often noticed that in industry , this is regarded as a kind of bonus , a perk that allows scientists to travel to nice places .
But it is absolutely essential , even if that means talking things through with direct competitors .</tokentext>
<sentencetext>Yes, but that pre-supposes the ability to invent a new theory, because scientists are very unwilling to discard a theory if they have no alternative.
After all, having no theoretical framework at all is very uncomfortable. And there I do think that Dunbar makes a perfectly valid point: Any group of specialists who are all of the same mind is very bad at thinking "out of the box" and inventing a new theory.
To be able to do that, you need a healthy mixture of different backgrounds, and enough dissent to stimulate the debate.
Unfortunately, scientists often assemble in excessively homogeneous groups, sometimes on their own initiative ("old boys network") and sometimes through deliberate but foolish policy ("center of excellence"). This is part of the reason why publishing and attending scientific congresses is a vital part of scientific activity.
I've often noticed that in industry, this is regarded as a kind of bonus, a perk that allows scientists to travel to nice places.
But it is absolutely essential, even if that means talking things through with direct competitors.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30604426</id>
	<title>Re:Ridiculous</title>
	<author>Anonymous</author>
	<datestamp>1262268600000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><i>If they're really throwing out data just because it 'doesn't make sense', they're doing religion, not science.</i> </p><p>Religion never makes any sense.</p></htmltext>
<tokenext>If they 're really throwing out data just because it 'does n't make sense ' , they 're doing religion , not science .
Religion never makes any sense .</tokentext>
<sentencetext>If they're really throwing out data just because it 'doesn't make sense', they're doing religion, not science.
Religion never makes any sense.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30606876</id>
	<title>Re:Ridiculous</title>
	<author>Anonymous</author>
	<datestamp>1262283000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>But how good are most of us at formulating new hypotheses? I am inclined to interpret the claim that <em>the data didn't make sense</em> as an admission that <em>I can't make sense of the data</em>, which is something else. However, useful interpretation of such data does require a hypothesis of some kind.</p><p>If you can't think of a new hypothesis, or if you can think of one but not of a valid scientific method to test it, then it may be best to move on, because data that cannot be understood are of little use to the scientists that generated them. (Perhaps they might be to someone else, but they are almost impossible to publish.) I concede that people who can never think of a new hypothesis are not good scientists, but sometimes even the best are just baffled.</p><p>Besides the threshold of finding a new hypothesis, there is also the problem of having it accepted by your colleagues. I once lost a battle to get a new hypothesis inserted in an article, not even because its merit was questioned, but purely on the ground that its explanation involved mathematical formulas, and I was assured that Journal X would never print mathematical formulas. I was told that in the branch of science concerned, that just wasn't done. And no, I am not that old: This was in 2008.</p><p>There also is the discouraging factor that most hypotheses turn out to be wrong, even if great care is taken in constructing them. Most laboratory scientists have some confidence in their ability to generate valid data, but writing down a hypothesis is a larger risk...</p><p>A recent paper by Munos (Nature Reviews Drug Discovery of December) points out the universal failure of the various methods that the pharmaceutical industry has tried to predict success or enhance the probability of success. 
Despite being motivated by expense bills that run in the hundreds of millions of dollars, the number of successes is primarily linked to the number of attempts, with very few indications that human attempts to outsmart nature are of any avail.</p></htmltext>
<tokenext>But how good are most of us at formulating new hypotheses ?
I am inclined to interpret the claim that the data did n't make sense as an admission that I ca n't make sense of the data , which is something else .
However , useful interpretation of such data does require a hypothesis of some kind . If you ca n't think of a new hypothesis , or if you can think of one but not of a valid scientific method to test it , then it may be best to move on , because data that can not be understood are of little use to the scientists that generated them .
( Perhaps they might be to someone else , but they are almost impossible to publish .
) I concede that people who can never think of a new hypothesis are not good scientists , but sometimes even the best are just baffled . Besides the threshold of finding a new hypothesis , there is also the problem of having it accepted by your colleagues .
I once lost a battle to get a new hypothesis inserted in an article , not even because its merit was questioned , but purely on the ground that its explanation involved mathematical formulas , and I was assured that Journal X would never print mathematical formulas .
I was told that in the branch of science concerned , that just was n't done .
And no , I am not that old : This was in 2008 . There also is the discouraging factor that most hypotheses turn out to be wrong , even if great care is taken in constructing them .
Most laboratory scientists have some confidence in their ability to generate valid data , but writing down a hypothesis is a larger risk ... A recent paper by Munos ( Nature Reviews Drug Discovery of December ) points out the universal failure of the various methods that the pharmaceutical industry has tried to predict success or enhance the probability of success .
Despite being motivated by expense bills that run in the hundreds of millions of dollars , the number of successes is primarily linked to the number of attempts , with very few indications that human attempts to outsmart nature are of any avail .</tokentext>
<sentencetext>But how good are most of us at formulating new hypotheses?
I am inclined to interpret that claim that the the data didn't make sense as an admission that I can't make sense of the data, which is something else.
However, useful interpretation of such data does requires a hypothesis of some kind.If you can't think of a new hypothesis, or if you can think of one but not of a valid scientific method to test it, then it may be best to move on, because data that cannot be understood are of little use to to the scientists that generated them.
(Perhaps they might be to someone else, but they are almost impossible to publish.
) I concede that people who can never think of a new hypothesis are not good scientists, but sometimes even the best are just baffled.Besides the threshold of finding a new hypothesis, there is also the problem of having it accepted by your colleagues.
I once lost a battle to get a new hypothesis inserted in an article, not even because its merit was questioned, but purely on the ground that its explanation involved mathematical formulas, and I was assured that Journal X would never print mathematical formulas.
I was told that in the branch of science concerned, that just wasn't done.
And no, I am not that old: This was in 2008.There also is the discouraging factor that most hypothesis turn out to be wrong, even if great care is taken in constructing them.
Most laboratory scientists have some confidence in their ability to generate valid data, but writing down a hypothesis is a larger risk...A recent paper by Munos (Nature Reviews Drug Discovery of December) points out the universal failure of the various methods that the pharmaceutical industry has tried to predict success or enhance the probability of success.
Despite being motivated by expense bills that run in the hundreds of millions of dollars, the number of successes is primarily linked to the number of attempts, with very few indications that human attempts to outsmart nature are of any avail.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602922</id>
	<title>Re:Ridiculous</title>
	<author>sarkeizen</author>
	<datestamp>1259862480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Ok, I'm really not clear on what you're trying to illustrate here.
<br> <br>
Sure, you can get a minority result from an experiment, but the less likely the outcome, the more costly it is to get those results, and the less likely it is that anyone else who repeats the experiment will see the same result set.
<br> <br>
In other words, if you tried to rig a test run at a 95% confidence level, you would have to run the same test about twenty times just to expect one result outside your confidence interval, *but* even that doesn't guarantee that the result shifts in the direction you want it to go.
<br> <br>
Not only that, but any other group that repeats the experiment is more likely than not to end up with a result that differs from yours.  Worse yet, if they happen to have a significantly larger N (or some other feature that tends to improve accuracy), then it's your experiment, not theirs, that will tend to be looked upon skeptically.
<br> <br>
So what exactly is bothering you here?  That people rig results by spending twenty times the amount of grant money?  At those prices and at that risk you might as well just "shape" your data or run some other kind of low-tech scam.
<br> <br>
The actual thing to be concerned about is *NOT* the idea that there is some slight chance of fixing results, but rather that there is no obligation to publish.  So someone who funds a study and doesn't like the outcome can make sure that it never sees the light of day.  This is a real problem in fields like medicine.</htmltext>
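The "twenty times" arithmetic in the comment above can be sketched with a short simulation (an editorial illustration, not part of the thread; the sample sizes and constants are invented). Under a true null effect, each rerun has roughly a 5% chance of landing outside a 95% confidence interval, so twenty reruns produce at least one spurious "hit" with probability about 1 - 0.95^20 ≈ 0.64: likely, but not guaranteed, and with a random sign, just as the commenter notes.

```python
import random
import statistics

random.seed(1)
ALPHA = 0.05          # 95% confidence level
N_PER_TEST = 30       # samples per experiment
N_REPEATS = 20        # how many times the "rigger" reruns the experiment

def experiment():
    """One experiment on a true null: draw N(0,1) samples and report
    True if the 95% CI around the sample mean excludes 0 (a false positive)."""
    xs = [random.gauss(0, 1) for _ in range(N_PER_TEST)]
    mean = statistics.fmean(xs)
    se = statistics.stdev(xs) / N_PER_TEST ** 0.5
    return abs(mean) > 1.96 * se   # outside the 95% CI

# Fraction of 20-rerun "campaigns" that get at least one spurious hit.
trials = 2000
hits = sum(any(experiment() for _ in range(N_REPEATS)) for _ in range(trials))
print(f"fraction of campaigns with >=1 false positive: {hits / trials:.2f}")
```

The simulated fraction comes out in the vicinity of the theoretical 1 - 0.95^20, which supports the comment's point: rerunning buys you a *chance* of a result outside the interval, not a guarantee, and not in a chosen direction.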
<tokenext>Ok I 'm really not clear on what you 're trying to illustrate here .
Sure you can get a minority result from an experiment but the less likely the outcome , the more costly it is to get those results and the less likely anyone else who repeats that experiment will see the same result set .
In other words if you tried to rig a test with a 95 \ % confidence level .
You would have to run the same test twenty times just to guarantee a result outside your confidence interval * but * that does n't guarantee that the result shifts in the direction you wish the result to go .
Not only that but any other group who repeats that experiment is more likely than not to end up with a result that differs from yours .
Worse yet , if they happen to have a significantly larger N ( or some other feature that would tend to improve accuracy ) then it 's your experiment not theirs that will tend to be looked upon skeptically .
So what exactly is bothering you here ?
That people rig results by spending twenty times the amount of grant money ?
At those prices and at that risk you might as well just " shape " your data or some other kind low-tech scam .
An actual thing to be concerned of is * NOT * the idea that there is some slight chance of fixing results but rather that there is no obligation to publish .
So that someone who funds a study who does n't like the outcome can make sure that it does n't see the light of day .
This is a real problem in fields like medicine .</tokentext>
<sentencetext>Ok I'm really not clear on what you're trying to illustrate here.
Sure you can get a minority result from an experiment but the less likely the outcome, the more costly it is to get those results and the less likely anyone else who repeats that experiment will see the same result set.
In other words if you tried to rig a test with a 95\% confidence level.
You would have to run the same test twenty times just to guarantee a result outside your confidence interval *but* that doesn't guarantee that the result shifts in the direction you wish the result to go.
Not only that but any other group who repeats that experiment is more likely than not to end up with a result that differs from yours.
Worse yet, if they happen to have a significantly larger N (or some other feature that would tend to improve accuracy) then it's your experiment not theirs that will tend to be looked upon skeptically.
So what exactly is bothering you here?
That people rig results by spending twenty times the amount of grant money?
At those prices and at that risk you might as well just "shape" your data or some other kind low-tech scam.
An actual thing to be concerned of is *NOT* the idea that there is some slight chance of fixing results but rather that there is no obligation to publish.
So that someone who funds a study who doesn't like the outcome can make sure that it doesn't see the light of day.
This is a real problem in fields like medicine.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601648</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30611738</id>
	<title>Re:Sometimes screwing up leads to success ...</title>
	<author>Anonymous</author>
	<datestamp>1262271000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Wow, how apropos.  This article explains the East Anglia University CRU folks and other global warming alarmists "doctoring" their data to prove their pet CO2 theory.  The facts now appear to strongly suggest that cosmic radiation penetration of Earth's atmosphere - regulated by the solar wind - is what controls cloud development and cover, and thus the global climate of Earth.  About a billion years' worth of ocean-floor sediment data strongly backs up this new theory.  If this is new to you, please do some research or watch this video.<br>http://www.youtube.com/watch?v=dKoUwttE0BA</p><p>http://en.wikipedia.org/wiki/Henrik_Svensmark<br>Is he the next Darwin?</p></htmltext>
<tokenext>Wow , how apropos .
This article explains the East Anglia University CRU folks and other global warming alarmists " doctoring " their data to prove their pet CO2 theory .
The facts now appear to strongly suggest that cosmic radiation penetration of earth 's atmosphere - regulated by the solar wind - is what controls cloud development and cover , and thus the global climate of earth .
About 1 Billion years worth of ocean floor sediment data strongly backs up this new theory .
If this is new to you , please do some research or watch this video.http : //www.youtube.com/watch ? v = dKoUwttE0BAhttp : //en.wikipedia.org/wiki/Henrik \ _SvensmarkIs he the next Darwin ?</tokentext>
<sentencetext>Wow, how apropos.
This article explains the East Anglia University CRU folks and other global warming alarmists "doctoring" their data to prove their pet CO2 theory.
The facts now appear to strongly suggest that cosmic radiation penetration of earth's atmosphere - regulated by the solar wind - is what controls cloud development and cover, and thus the global climate of earth.
About 1 Billion years worth of ocean floor sediment data strongly backs up this new theory.
If this is new to you, please do some research or watch this video.http://www.youtube.com/watch?v=dKoUwttE0BAhttp://en.wikipedia.org/wiki/Henrik\_SvensmarkIs he the next Darwin?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601542</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30606932</id>
	<title>Re:Sometimes screwing up leads to success ...</title>
	<author>s2theg</author>
	<datestamp>1262283240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>"Sometimes screwing up leads to success." The moral of the story here is: discover, don't invent.
<br> <br>
That failure leads to success is hard-coded into the scientific method.
<br> <br>
Part of success is being wrong, facing it, and adjusting your assumptions to fit the facts.</htmltext>
<tokenext>" Sometimes screwing up leads to success .
" the moral of the story here is : discover , do n't invent .
That failure leads to success is hard coded into the Scientific method .
Part of success is being wrong , facing it , and adjusting your assumptions to fit the facts .</tokentext>
<sentencetext>"Sometimes screwing up leads to success.
" the moral of the story here is: discover, don't invent.
That failure leads to success is hard coded into the Scientific method.
Part of success is being wrong, facing it, and adjusting your assumptions to fit the facts.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601542</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602200</id>
	<title>Re:You never discard the data</title>
	<author>Anonymous</author>
	<datestamp>1259854440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>If the data don't make sense according to your theory, you don't discard the data, you discard the theory and work out a new one that fits the facts as you've observed them.  TFA says that Dunbar was watching postdocs doing research, and if so, they should have known better.  Alas, too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what's actually going on.</p></div><p>Not a climate "scientist", are you?</p></htmltext>
<tokenext>If the data do n't make sense according to your theory , you do n't discard the data , you discard the theory and work out a new one that fits the facts as you 've observed them .
TFA says that Dunbar was watching postdocs doing research , and if so , they should have known better .
Alas , too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what 's actually going on.Not a climate " scientist " , are you ?</tokentext>
<sentencetext>If the data don't make sense according to your theory, you don't discard the data, you discard the theory and work out a new one that fits the facts as you've observed them.
TFA says that Dunbar was watching postdocs doing research, and if so, they should have known better.
Alas, too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what's actually going on.Not a climate "scientist", are you?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602140</id>
	<title>Re:Ridiculous</title>
	<author>BlueParrot</author>
	<datestamp>1259853660000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>"It wasn't uncommon for someone to spend a month on a project and then just discard all their data because the data didn't make sense."</p><p>That doesn't mean the data is wrong, it means the <em>hypothesis</em> was wrong, if not the theory, and needs to be modified.</p><p>If they're really throwing out data just because it 'doesn't make sense', they're doing religion, not science.</p><p>a) You've clearly never done any real research, or you would be well aware of the hundreds of millions of ways you can screw up an experiment and get nonsense data (bad machinery, you wired up a detector wrong, the cell lines you were feeding vitamin K happened to get contaminated by bacteria halfway through, etc.)</p><p>b) There is almost never a clear difference between data and theory. The only raw data you have is a bunch of numbers on a piece of paper; in order to determine whether they correspond to your theory, you need to interpret the numbers somehow, and it may just as well be the interpretation that is wrong as the theory you were trying to test using the interpreted data.</p><p>c) Because you are often restricted by cost and time, it's often not feasible to do a full analysis of why your experiment did not work. Hence, if you did not get any useful results (the uncertainty was too large, it seems obvious you must have messed up somewhere, etc.), frequently the only sane option is to conclude your experiment was a failure.</p><p>d) If scientists followed your advice we would never have got the electronic equipment you used to make your post.</p><p>Basically, your ideas about what science is or should be are extremely naive, and to anybody who has done even a high school chemistry experiment it should be clear you have no idea what you're talking about.</p></htmltext>
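Point (b) above, that raw numbers only become "data" through an interpretive model, can be sketched in a few lines (an editorial illustration; the calibration functions and constants are invented, not taken from the thread):

```python
# The same raw detector counts yield different "data" under different
# assumed response models, so data and theory are not cleanly separable.
raw_counts = [1052, 980, 1130, 1015]  # the numbers on the page

def linear_calibration(counts, gain=0.01, offset=0.0):
    """Interpret counts as concentration via an assumed linear response."""
    return [gain * c + offset for c in counts]

def saturating_calibration(counts, cmax=15.0, k=500.0):
    """The same counts under an assumed saturating detector response."""
    return [cmax * c / (k + c) for c in counts]

linear = linear_calibration(raw_counts)
saturating = saturating_calibration(raw_counts)
print("linear model:    ", [round(x, 2) for x in linear])
print("saturating model:", [round(x, 2) for x in saturating])
```

The same four numbers support different conclusions depending on which response model is assumed, which is the sense in which the interpretation, rather than the theory under test, can be the thing that is wrong.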
<tokenext>" It was n't uncommon for someone to spend a month on a project and then just discard all their data because the data did n't make sense .
" That does n't mean the data is wrong , it means the /hypothesis/ was wrong , if not the theory , and needs to be modified.If they 're really throwing out date just because it 'does n't make sense ' , they 're doing religion , not science.a ) You 've clearly never done any real research or you would be well aware of the hundreds of millions of ways you can screw up an experiment and get nonsense data ( bad machinery , you wired up a detector wrong , the cell lines you were feeding vitamin K happened to get contaminated by bacteria halfway through etc... ) b ) There is almost never a clear difference between data and theory .
The only raw data you have is a bunch of numbers on a piece of paper , in order to determine if they correspond to your theory or not you need to interpret the numbers somehow , and it may just as well be the interpretation that is wrong as is the theory you were trying to test using the interpreted data.c ) Because you are often restricted by cost and time it 's often not feasible to do a full analysis of why your experiment did not work .
Hence if you did not get any useful results ( uncertainty was too large , it seems obvious you must have messed up somewhere etc.. ) then frequently the only sane option is to conclude your experiment was a failure.d ) If scientists followed your advice we would never have got the electronic equipment you used to make your post.Basically your ideas about what science is or should be are extremely naive and to anybody who has done even a high school chemistry experiment it should be clear you have no idea what you 're talking about .</tokentext>
<sentencetext>"It wasn't uncommon for someone to spend a month on a project and then just discard all their data because the data didn't make sense.
"That doesn't mean the data is wrong, it means the /hypothesis/ was wrong, if not the theory, and needs to be modified.If they're really throwing out date just because it 'doesn't make sense', they're doing religion, not science.a) You've clearly never done any real research or you would be well aware of the hundreds of millions of ways you can screw up an experiment and get nonsense data ( bad machinery, you wired up a detector wrong, the cell lines you were feeding vitamin K happened to get contaminated by bacteria halfway through etc... )b) There is almost never a clear difference between data and theory.
The only raw data you have is a bunch of numbers on a piece of paper, in order to determine if they correspond to your theory or not you need to interpret the numbers somehow, and it may just as well be the interpretation that is wrong as is the theory you were trying to test using the interpreted data.c) Because you are often restricted by cost and time it's often not feasible to do a full analysis of why your experiment did not work.
Hence if you did not get any useful results ( uncertainty was too large, it seems obvious you must have messed up somewhere etc.. ) then frequently the only sane option is to conclude your experiment was a failure.d) If scientists followed your advice we would never have got the electronic equipment you used to make your post.Basically your ideas about what science is or should be are extremely naive and to anybody who has done even a high school chemistry experiment it should be clear you have no idea what you're talking about.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586</id>
	<title>You never discard the data</title>
	<author>techno-vampire</author>
	<datestamp>1259848560000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext>If the data don't make sense according to your theory, you don't discard the data, you discard the theory and work out a new one that fits the facts as you've observed them.  TFA says that Dunbar was watching postdocs doing research, and if so, they should have known better.  Alas, too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what's actually going on.</htmltext>
<tokenext>If the data do n't make sense according to your theory , you do n't discard the data , you discard the theory and work out a new one that fits the facts as you 've observed them .
TFA says that Dunbar was watching postdocs doing research , and if so , they should have known better .
Alas , too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what 's actually going on .</tokentext>
<sentencetext>If the data don't make sense according to your theory, you don't discard the data, you discard the theory and work out a new one that fits the facts as you've observed them.
TFA says that Dunbar was watching postdocs doing research, and if so, they should have known better.
Alas, too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what's actually going on.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601816</id>
	<title>Re:You never discard the data</title>
	<author>Anonymous</author>
	<datestamp>1259850540000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext><p>The problem is you generally do not get money to simply study X.</p><p>You get money to show that X affects Y in manner Z. If X doesn't affect Y in manner Z, they pull your funding, give you a failing grade, or otherwise find ways to punish the results.</p><p>They do this over and over again and then wonder why researchers fake data, toss good data out, and re-do the study looking for the results they want instead of what is.</p><p>Want to do a drug study that says Drug A is safe to fight cancer? Got results that indicate an increased risk of heart attack? Have the study declared flawed, re-do it with a slightly different mix of subjects, and repeat. With luck your new study shows the heart attack risk is below the error threshold of your study and you can ignore it. Release your drug, make your millions and, after you leave, have the real-world implications show up on the 5 o'clock news.</p></htmltext>
<tokenext>The problem is you generally do not get money to simply study X.You get money to show that X affects Y in manner Z. If X does n't affect Y in manner Z they pull your funding , give you a failing grade , or otherwise find ways to punish the results.They do this over and over again and then wonder why researches fake data , toss good data out and re-do the study looking for results they want instead of what is.Want to do a drug study that says Drug A is safe to fight cancer ?
Got results that indicate an increase risk of heart attack ?
Have the study declared flawed , re-do the study with a slightly different mix of subjects and repeat .
With luck your new study shows the heart attack risk is below the error threshold of your study and you can ignore it .
Release your drug , make your millions and , after you leave have the real-world implicates show up on the 5 o'clock news .</tokentext>
<sentencetext>The problem is you generally do not get money to simply study X.You get money to show that X affects Y in manner Z. If X doesn't affect Y in manner Z they pull your funding, give you a failing grade, or otherwise find ways to punish the results.They do this over and over again and then wonder why researches fake data, toss good data out and re-do the study looking for results they want instead of what is.Want to do a drug study that says Drug A is safe to fight cancer?
Got results that indicate an increase risk of heart attack?
Have the study declared flawed, re-do the study with a slightly different mix of subjects and repeat.
With luck your new study shows the heart attack risk is below the error threshold of your study and you can ignore it.
Release your drug, make your millions and, after you leave have the real-world implicates show up on the 5 o'clock news.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30605872</id>
	<title>Re:You never discard the data</title>
	<author>Quirkz</author>
	<datestamp>1262278800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>If the data don't make sense according to your theory, you don't discard the data, you discard the theory and work out a new one that fits the facts as you've observed them.</p></div><p>I'm more computer science than research science, but I've seen plenty of examples where throwing out data is the sensible thing to do. Computer analogy time:</p><p>Sometimes an application isn't working. So you set up experiments, adjusting settings, trying to find out what's wrong with the application. You get a good setting, for ten minutes, but then it goes bad. So you turn it back to the old setting that used to mostly work, but had one flaw. Now that flaw is gone, but you're getting a completely different error. The data itself is inconsistent and nonsensical. That's when you step back, reconsider what you think you know, and possibly throw out all the data as worthless.</p><p>Why? Well, maybe the software isn't bad at all. It's actually a memory problem or a bad hard drive. Something else isn't the way you expected, so everything it seemed like you were learning was worthless.</p><p>Now, if bad data leads you in the right direction, that's still valuable, even if you throw the data out in the process. Other times bad data eventually leads you to a human error (troubleshooting that network problem was unnecessary once you realize you didn't have the cable plugged in) and you've got to start over.</p></htmltext>
<tokenext>If the data do n't make sense according to your theory , you do n't discard the data , you discard the theory and work out a new one that fits the facts as you 've observed them .
I 'm more computer science than research science , but I 've seen plenty of examples where throwing out data is the sensible thing to do .
Computer analogy time : Sometimes an application is n't working .
So you set up experiments , adjusting settings , trying to find out what 's wrong with the application .
You get a good setting , for ten minutes , but then it goes bad .
So you turn it back to the old setting that used to mostly work , but had one flaw .
Now that flaw is gone , but you 're getting a completely different error .
The data itself is inconsistent and nonsensical .
That 's when you step back , reconsider what you think you know , and possibly throw out all the data as worthless .
Why ? Well maybe the software is n't bad at all .
It 's actually a memory problem or a bad hard drive .
Something else is n't the way you expected , so everything it seemed like you were learning was worthless .
Now if bad data leads you in the right direction , that 's still valuable , even if you throw the data out in the process .
Other times bad data eventually leads you to a human error ( troubleshooting that network problem was unnecessary once you realize you did n't have the cable plugged in ) and you 've got to start over .</tokentext>
<sentencetext>If the data don't make sense according to your theory, you don't discard the data, you discard the theory and work out a new one that fits the facts as you've observed them.
I'm more computer science than research science, but I've seen plenty of examples where throwing out data is the sensible thing to do.
Computer analogy time:

Sometimes an application isn't working.
So you set up experiments, adjusting settings, trying to find out what's wrong with the application.
You get a good setting, for ten minutes, but then it goes bad.
So you turn it back to the old setting that used to mostly work, but had one flaw.
Now that flaw is gone, but you're getting a completely different error.
The data itself is inconsistent and nonsensical.
That's when you step back, reconsider what you think you know, and possibly throw out all the data as worthless.
Why? Well maybe the software isn't bad at all.
It's actually a memory problem or a bad hard drive.
Something else isn't the way you expected, so everything it seemed like you were learning was worthless.
Now if bad data leads you in the right direction, that's still valuable, even if you throw the data out in the process.
Other times bad data eventually leads you to a human error (troubleshooting that network problem was unnecessary once you realize you didn't have the cable plugged in) and you've got to start over.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601956</id>
	<title>Re:Ridiculous</title>
	<author>TapeCutter</author>
	<datestamp>1259851800000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><i>"If you are willing to run an experiment enough times, you will eventually get data to support your assertions."</i>
<br> <br>
Yes, I believe Edison tried over 5000 different hand-made bulb/filament combinations before he found one that supported his assertion.
<br> <br>
Throwing out data is not about proving pet theories, it's about admitting you cocked up the experiment.
e.g.: <a href="http://www.abc.net.au/science/features/whyisitso/" title="abc.net.au">Prof Sumner Miller</a> [abc.net.au] never edited failed demonstrations out of his TV show, nor did he claim a failed demo proved accepted theories of physics wrong; rather, he would simply exclaim "Experiments never fail, it is I who have failed to set the right conditions for nature to cooperate" and then try again.</htmltext>
<tokenext>" If you are willing to run an experiment enough times , you will eventually get data to support your assertions .
" Yes , I belive Edison tried over 5000 different hand made bulb/filiment combinations before he found one that supported his assertion .
Thowing out data is not about proving pet theories , it 's about admitting you cocked up the experiment .
eg : Prof Sumner Miller [ abc.net.au ] never edited out failed demonstrations from his TV show , nor did he claim the failed demo proved accepted theories of physics were wrong , rather he would simply exclaim - " Experiments never fail , it is I who have failed to set the right conditions for nature to cooperate " and then try again .</tokentext>
<sentencetext>"If you are willing to run an experiment enough times, you will eventually get data to support your assertions.
"
 
Yes, I belive Edison tried over 5000 different hand made bulb/filiment combinations before he found one that supported his assertion.
Thowing out data is not about proving pet theories, it's about admitting you cocked up the experiment.
eg: Prof Sumner Miller [abc.net.au] never edited out failed demonstrations from his TV show, nor did he claim the failed demo proved accepted theories of physics were wrong, rather he would simply exclaim - "Experiments never fail, it is I who have failed to set the right conditions for nature to cooperate" and then try again.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601648</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602056</id>
	<title>Re:You never discard the data</title>
	<author>Rising Ape</author>
	<datestamp>1259852760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p><i>If the data don't make sense according to your theory, you don't discard the data, you discard the theory</i></p><p>Not really, assuming the theory is something well-established and tested. Popper oversimplified things - experimental data is rarely so unambiguous that you can outright discard a reliable theory. It's much more likely that you messed up than that you proved it wrong, or maybe the theory needs a fairly minor modification rather than complete rejection.</p><p>That's no reason to discard data though - not until you understand *why* the discrepancy arises, or at least have established that the data is unreliable in some other way. If, despite diligent effort, you can't find anything wrong with your analysis, then you publish and see if anyone else can explain it. The evidence may well then accumulate to the point at which the original theory is untenable, or alternatively it may demonstrate a different explanation for the original result that doesn't invalidate the theory at all.</p></htmltext>
<tokentext>If the data do n't make sense according to your theory , you do n't discard the data , you discard the theory . Not really , assuming the theory is something well-established and tested .
Popper oversimplified things - experimental data is rarely so unambiguous that you can outright discard a reliable theory .
It 's much more likely that you messed up than that you proved it wrong , or maybe the theory needs a fairly minor modification rather than complete rejection . That 's no reason to discard data though - not until you understand * why * the discrepancy arises , or at least have established that the data is unreliable in some other way .
If , despite diligent effort , you ca n't find anything wrong with your analysis , then you publish and see if anyone else can explain it .
The evidence may well then accumulate to the point at which the original theory is untenable , or alternatively it may demonstrate a different explanation for the original result that does n't invalidate the theory at all .</tokentext>
<sentencetext>If the data don't make sense according to your theory, you don't discard the data, you discard the theory. Not really, assuming the theory is something well-established and tested.
Popper oversimplified things - experimental data is rarely so unambiguous that you can outright discard a reliable theory.
It's much more likely that you messed up than that you proved it wrong, or maybe the theory needs a fairly minor modification rather than complete rejection. That's no reason to discard data though - not until you understand *why* the discrepancy arises, or at least have established that the data is unreliable in some other way.
If, despite diligent effort, you can't find anything wrong with your analysis, then you publish and see if anyone else can explain it.
The evidence may well then accumulate to the point at which the original theory is untenable, or alternatively it may demonstrate a different explanation for the original result that doesn't invalidate the theory at all.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601768</id>
	<title>Re:You never discard the data</title>
	<author>A beautiful mind</author>
	<datestamp>1259850240000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><blockquote><div><p>Alas, too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what's actually going on.</p></div></blockquote><p>
It's just a result of how science is performed. Science doesn't have low-hanging fruit anymore; consequently, any problem that someone investigates takes dedication, because it's intellectually hard, takes lots of effort, or both. Most people aren't going to be motivated enough to put that much effort into it without already having an axe to grind, a point to prove, a pet theory to push into the limelight.<br> <br>
Also, in a lot of cases you don't know there is something interesting in the area you're looking at. I think what separates bad scientists from good scientists is how you realize when something doesn't match up to your preconceived notions and how you recover from conflicting data.</p>
	</htmltext>
<tokentext>Alas , too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what 's actually going on .
It 's just a result of how science is performed .
Science does n't have low hanging fruits anymore , consequently any problem that someone investigates takes dedication , because it 's intellectually hard or takes lots of effort or both .
Most people are n't going to be motivated enough to put that much effort into it without already having an axe to grind , a point to prove , a pet theory to push into the limelight .
Also , in a lot of cases you do n't know there is something interesting in the area you 're looking at .
I think what separates bad scientists from good scientists is how you realize when something does n't match up to your preconceived notions and how you recover from conflicting data .</tokentext>
<sentencetext>Alas, too many people who call themselves scientists are more interested in proving their pet theory true than in finding out what's actually going on.
It's just a result of how science is performed.
Science doesn't have low hanging fruits anymore, consequently any problem that someone investigates takes dedication, because it's intellectually hard or takes lots of effort or both.
Most people aren't going to be motivated enough to put that much effort into it without already having an axe to grind, a point to prove, a pet theory to push into the limelight.
Also, in a lot of cases you don't know there is something interesting in the area you're looking at.
I think what separates bad scientists from good scientists is how you realize when something doesn't match up to your preconceived notions and how you recover from conflicting data.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602540</id>
	<title>Re:Ridiculous</title>
	<author>Anonymous</author>
	<datestamp>1259857500000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Lesson #1 for any scientist:</p><p>Nature does whatever the hell it wants to. The best that a scientist can ever do is try to model it. If the model doesn't fit, it is wrong. Nature is <b>always</b> right.</p><p>Lesson #2 for any scientist:</p><p>Don't discard data. If there is a systematic error, then verify that it skewed the data and try to get data without that error. If there is no significant systematic error, then the data is correct.</p><p>Lesson #3 for any scientist:</p><p>Be humble. Pride, past successes, and the need for it to be right do not affect whether a theory is correct.</p></htmltext>
<tokentext>Lesson # 1 for any scientist : Nature does whatever the hell it wants to .
The best that a scientist can ever do is try to model it .
If the model does n't fit , it is wrong .
Nature is always right . Lesson # 2 for any scientist : Do n't discard data .
If there is a systematic error , then verify that it skewed the data and try to get data without that error .
If there is no significant systematic error , then the data is correct . Lesson # 3 for any scientist : Be humble .
Pride , past successes , and the need for it to be right do not affect whether a theory is correct .</tokentext>
<sentencetext>Lesson #1 for any scientist: Nature does whatever the hell it wants to.
The best that a scientist can ever do is try to model it.
If the model doesn't fit, it is wrong.
Nature is always right. Lesson #2 for any scientist: Don't discard data.
If there is a systematic error, then verify that it skewed the data and try to get data without that error.
If there is no significant systematic error, then the data is correct. Lesson #3 for any scientist: Be humble.
Pride, past successes, and the need for it to be right do not affect whether a theory is correct.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602022</id>
	<title>Re:Ridiculous</title>
	<author>rbannon</author>
	<datestamp>1259852400000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
<htmltext><p>Not religion, but federally funded dogma. More than 20 years ago I became aware of how dogma gets grounded in fundamental research: you need to write grants that fit the dogma. One hapless soul actually stood up during a big AIDS conference and suggested that the researchers were mere lemmings. He, of course, was shouted down, but he was only trying to tell the lemmings to keep an open mind. Fast forward 20+ years and the lemmings are still in control.</p><p>Our educational system is totally broken when the educated just want things to fit. Even in mathematics, we're promoting a crop of "just tell me what to do!" students.</p></htmltext>
<tokentext>Not religion , but federally funded dogma .
More than 20 years ago I became aware of how dogma gets grounded in fundamental research : you need to write grants that fit the dogma .
One hapless soul actually stood up during a big AIDS conference and suggested that the researchers were mere lemmings .
He , of course , was shouted down , but he was only trying to tell the lemmings to keep an open mind .
Fast forward 20 + years and the lemmings are still in control . Our educational system is totally broken when the educated just want things to fit .
Even in mathematics , we 're promoting a crop of " just tell me what to do !
"</tokentext>
<sentencetext>Not religion, but federally funded dogma.
More than 20 years ago I became aware of how dogma gets grounded in fundamental research: you need to write grants that fit the dogma.
One hapless soul actually stood up during a big AIDS conference and suggested that the researchers were mere lemmings.
He, of course, was shouted down, but he was only trying to tell the lemmings to keep an open mind.
Fast forward 20+ years and the lemmings are still in control. Our educational system is totally broken when the educated just want things to fit.
Even in mathematics, we're promoting a crop of "just tell me what to do!
"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601514</id>
	<title>Why most scientists and engineers screw up</title>
	<author>Anonymous</author>
	<datestamp>1259847960000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>-1</modscore>
<htmltext>It's because they are not very good. We could use 75% fewer, and we'd get more done. They picked the wrong major in college.</htmltext>
<tokentext>It 's because they are not very good .
We could use 75 % fewer , and we 'd get more done .
They picked the wrong major in college .</tokentext>
<sentencetext>It's because they are not very good.
We could use 75% fewer, and we'd get more done.
They picked the wrong major in college.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602844</id>
	<title>Re:Ridiculous</title>
	<author>DriedClexler</author>
	<datestamp>1259861280000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
<htmltext><p>Very true.  Sometimes the data really are wrong.  (The <a href="http://en.wikipedia.org/wiki/Principle_of_Minimum_Discrimination_Information#Principle_of_minimum_discrimination_information" title="wikipedia.org">Minimum Discrimination Information criterion</a> [wikipedia.org] is a way of rigorously answering the question of whether your data or your theory is in error.)</p><p>But simply <i>throwing out</i> the data is the 100% <b>wrong</b> thing to do.  That destroys the information that would have eventually told you if you were really doing something wrong in your experiment, or if you've discovered something new.</p><p>It also creates an information cascade-type situation where everyone culls any non-conforming data they see, and then all available data is conforming, which makes later scientists more skeptical of future non-conforming data, and so on.  This cascade can make it so that even a randomly-chosen theory could be supported by the literature, even in the face of being completely unrelated to reality.</p><p>Supposedly, that's what happened with the Millikan oil drop experiment and everyone biasing themselves in the direction of his initial, wrong value for electron charge.</p></htmltext>
<tokentext>Very true .
Sometimes the data really are wrong .
( The Minimum Discrimination Information criterion [ wikipedia.org ] is a way of rigorously answering the question of whether your data or your theory is in error .
) But simply throwing out the data is the 100 % wrong thing to do .
That destroys the information that would have eventually told you if you were really doing something wrong in your experiment , or if you 've discovered something new . It also creates an information cascade-type situation where everyone culls any non-conforming data they see , and then all available data is conforming , which makes later scientists more skeptical of future non-conforming data , and so on .
This cascade can make it so that even a randomly-chosen theory could be supported by the literature , even in the face of being completely unrelated to reality . Supposedly , that 's what happened with the Millikan oil drop experiment and everyone biasing themselves in the direction of his initial , wrong value for electron charge .</tokentext>
<sentencetext>Very true.
Sometimes the data really are wrong.
(The Minimum Discrimination Information criterion [wikipedia.org] is a way of rigorously answering the question of whether your data or your theory is in error.
) But simply throwing out the data is the 100% wrong thing to do.
That destroys the information that would have eventually told you if you were really doing something wrong in your experiment, or if you've discovered something new. It also creates an information cascade-type situation where everyone culls any non-conforming data they see, and then all available data is conforming, which makes later scientists more skeptical of future non-conforming data, and so on.
This cascade can make it so that even a randomly-chosen theory could be supported by the literature, even in the face of being completely unrelated to reality. Supposedly, that's what happened with the Millikan oil drop experiment and everyone biasing themselves in the direction of his initial, wrong value for electron charge.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601660</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601990</id>
	<title>posting to undo bad mod</title>
	<author>chris mazuc</author>
	<datestamp>1259852160000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>...</p></htmltext>
<tokentext>.. .</tokentext>
<sentencetext>...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601912</id>
	<title>Re:Ridiculous</title>
	<author>Anonymous</author>
	<datestamp>1259851200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Yeah, I remember when I was in college I used to throw a lot of dates out because they didn't make any sense.<br>Usually the dates threw me out, but that is another thing.</p></htmltext>
<tokentext>Yeah , I remember when I was in college I used to throw a lot of dates out because they did n't make any sense . Usually the dates threw me out , but that is another thing .</tokentext>
<sentencetext>Yeah, I remember when I was in college I used to throw a lot of dates out because they didn't make any sense. Usually the dates threw me out, but that is another thing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601662</id>
	<title>Good!</title>
	<author>RyanFenton</author>
	<datestamp>1259849400000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
<htmltext><p>If problems occur as you postulate elaborate hypotheses, then stop piling up the elaborate hypotheses!  But be sure to still make available your existing (complex) hypothesis, methodology and unexpected data - preventing others from going down the same path with the same methodology is still highly valuable!</p><p>Let's say you're looking at a production and consumption cycle involving neurotransmitters and neuroreceptors of some sort, and the various channels of input and output involved.  The starting presumption you base your hypothesis on is that there is a buildup which triggers an electrical signal to stop consumption and clear the channel.  The only evidence you can realistically gather for now is protein density at a certain output channel - but others have worked to ensure this is a reliable approach specifically under these circumstances.</p><p>So, you do the specific experiment, trigger the signal, but you get a wildly different result - the stop in consumption occurs, but the protein density does not change at all in the output channel.  What actually happened is still unknown; you just haven't verified any correlation with your hypothesis.  You still have valuable data, but no mechanism to verify under the circumstances.  Either your methodology failed, or you misunderstood what was happening - and the world of knowledge is made larger by either... even if your paymasters won't be happy about the result.</p><p>Science is often like throwing pebbles in complete darkness - it takes a lot of stones and close listening to make out a mental picture of the scene - especially when there's a lot of noise already around.  Everyone would love it if we could just flip the lights on - but we have yet to invent a light that can see into the inner workings of the functioning brain very well.  Gotta keep throwing those pebbles for now.</p><p>Ryan Fenton</p></htmltext>
<tokentext>If problems occur as you postulate elaborate hypothesis , then stop piling up the elaborate hypothesis !
But be sure and still make available your existing ( complex ) hypothesis , methodology and unexpected data - preventing others from going down the same path with the same methodology is still highly valuable ! Let 's say you 're looking at a production and consumption cycle involving neurotransmitters and neuroreceptors of some sort , and the various channels of input and output involved .
Your starting presumption you base your hypothesis on is that there is a buildup which triggers an electrical signal to stop consumption and clear the channel .
The only evidence you can realistically gather for now is protein density at a certain output channel - but others have worked to ensure this is a reliable approach specifically under these circumstances . So , you do the specific experiment , trigger the signal , but you get a wildly different result - the stop in consumption occurs , but the protein density does not change at all in the output channel .
What actually happened is still unknown , only you have n't verified any correlation with your hypothesis .
You still have valuable data , but no mechanism to verify under the circumstances .
Either your methodology failed , or you misunderstood what was happening - and the world of knowledge is made larger by either... even if your paymasters wo n't get happy about the result . Science is often like throwing pebbles in complete darkness - it takes a lot of stones and close listening to make out a mental picture of the scene - especially when there 's a lot of noise already around .
Everyone would love it if we could just flip the lights on - but we have yet to invent a light that can see into the inner workings of the functioning brain very well .
Got ta keep throwing those pebbles for now . Ryan Fenton</tokentext>
<sentencetext>If problems occur as you postulate elaborate hypothesis, then stop piling up the elaborate hypothesis!
But be sure and still make available your existing (complex) hypothesis, methodology and unexpected data - preventing others from going down the same path with the same methodology is still highly valuable! Let's say you're looking at a production and consumption cycle involving neurotransmitters and neuroreceptors of some sort, and the various channels of input and output involved.
Your starting presumption you base your hypothesis on is that there is a buildup which triggers an electrical signal to stop consumption and clear the channel.
The only evidence you can realistically gather for now is protein density at a certain output channel - but others have worked to ensure this is a reliable approach specifically under these circumstances. So, you do the specific experiment, trigger the signal, but you get a wildly different result - the stop in consumption occurs, but the protein density does not change at all in the output channel.
What actually happened is still unknown, only you haven't verified any correlation with your hypothesis.
You still have valuable data, but no mechanism to verify under the circumstances.
Either your methodology failed, or you misunderstood what was happening - and the world of knowledge is made larger by either... even if your paymasters won't get happy about the result. Science is often like throwing pebbles in complete darkness - it takes a lot of stones and close listening to make out a mental picture of the scene - especially when there's a lot of noise already around.
Everyone would love it if we could just flip the lights on - but we have yet to invent a light that can see into the inner workings of the functioning brain very well.
Gotta keep throwing those pebbles for now. Ryan Fenton</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601988</id>
	<title>The problem... (maybe?)</title>
	<author>pieisgood</author>
	<datestamp>1259852100000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
<htmltext>I can't help but think that neuroscience needs to calm down, sit back, and take a deep breath. We are examining a system and trying to reverse engineer it. We can't start out by trying to create elaborate hypotheses for large systems; we need to go low level and examine the simpler systems.

I really think they should hold off on the higher cognitive models until later, because we can't even completely model C. elegans, and it has the fewest neurons of any known living organism.

The way I see it, I fully expect their hypotheses to be wrong, because they don't thoroughly understand the low end of the system.</htmltext>
<tokentext>I ca n't help but think that neuroscience needs to calm down , sit back , and take a deep breath .
We are examining a system and we are trying to reverse engineer it .
We ca n't start out by trying to create elaborate hypothesis for large systems , we need to go low level and examine the simpler systems .
I really think they should hold on to the higher cognitive models for a later time because we ca n't even completely model C. Elegans and it has the least neurons of any , current , living organism .
The way I see it , I fully expect their hypothesis to be wrong , because they do n't thoroughly understand the low end of the system .</tokentext>
<sentencetext>I can't help but think that Neuroscience needs to calm down, sit back, and take a deep breath.
We are examining a system and we are trying to reverse engineer it.
We can't start out by trying to create elaborate hypothesis for large systems, we need to go low level and examine the simpler systems.
I really think they should hold on to the higher cognitive models for a later time because we can't even completely model C. Elegans and it has the least neurons of any, current, living organism.
The way I see it, I fully expect their hypothesis to be wrong, because they don't thoroughly understand the low end of the system.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602464</id>
	<title>Re:Ridiculous</title>
	<author>Anonymous</author>
	<datestamp>1259856780000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>The bulletproof, 'scientific' statement is something like "if you hook up the apparatus exactly so, then you will get these results," when you really want a meaningful statement like "cosmic ray muons travel at 0.99c."  These are very different claims.</p><p>The whole point of throwing out data is that "hooking up the apparatus exactly so" does not seem to correspond to measuring the speed of cosmic ray muons.</p></htmltext>
<tokentext>The bulletproof , 'scientific ' statement is something like " if you hook up the apparatus exactly so , then you will get these results , " when you really want a meaningful statement like " cosmic ray muons travel at 0.99c .
" These are very different claims . The whole point of throwing out data is because " hooking up the apparatus exactly so " does not seem to correspond to measuring the speed of cosmic ray muons .</tokentext>
<sentencetext>The bulletproof, 'scientific' statement is something like "if you hook up the apparatus exactly so, then you will get these results," when you really want a meaningful statement like "cosmic ray muons travel at 0.99c.
"  These are very different claims. The whole point of throwing out data is because "hooking up the apparatus exactly so" does not seem to correspond to measuring the speed of cosmic ray muons.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30603116</id>
	<title>Re:Sometimes screwing up leads to success ...</title>
	<author>shoor</author>
	<datestamp>1259865000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>When I first heard about background radiation, I thought to myself, didn't George Gamow predict that in one of his Mr Tompkins books which I read either in high school or junior high school?  (and these books were written for juveniles).  In fact Gamow did predict it.  I read years later in Timothy Ferris's book "The Red Limit, The Search For the Edge of the Universe" that Gamow was astonished that the discoverers of the background radiation did not credit his insight.  The book also mentions various scientists who were aware of the theory predicting background radiation but who didn't make the connection.  Some of them apparently admitted feeling stupid about it afterwards.  I guess I get worked up about this because Gamow actually gave a talk at the college where I was a student perhaps a year before he died.  I had looked forward to seeing him speak but he was obviously in very bad shape.

Thinking about it now, I don't know if it was in the Mr Tompkins books specifically, but it was in a book of Gamow's written for laymen. I remember he had something he called ylem. So, if I, a layman, could make the connection as soon as I heard about the detection of the background radiation, how come the experts couldn't? This is something that used to amaze me, but I've come across enough stories of experts screwing up over the years (I just saw a documentary about Bernard Madoff, for instance) that nowadays I'm less amazed. Still, what can you do? Nothing is certain; we're always playing the odds with any information we're given.</htmltext>
<tokentext>When I first heard about background radiation , I thought to myself , did n't George Gamow predict that in one of his Mr Tompkins books which I read either in high school or junior high school ?
( and these books were written for juveniles ) .
In fact Gamow did predict it .
I read years later in Timothy Ferris 's book " The Red Limit , The Search For the Edge of the Universe " that Gamow was astonished that the discoverers of the background radiation did not credit his insight .
The book also mentions various scientists who were aware of the theory predicting background radiation but who did n't make the connection .
Some of them apparently admitted feeling stupid about it afterwards .
I guess I get worked up about this because Gamow actually gave a talk at the college where I was a student perhaps a year before he died .
I had looked forward to seeing him speak but he was obviously in very bad shape .
Thinking about it now , I do n't know if it was in the Mr Tompkins books specifically , but it was in a book of Gamow 's written for laymen .
I remember he had something he called ylem .
So , if I , a layman could make the connection as soon as I heard about the detection of the background radiation , how come the experts could n't ?
This is something that used to amaze me , but I 've come across enough stories of experts screwing up over the years ( just saw a documentary about Bernard Madoff for instance ) , that nowadays I 'm less amazed .
Still , what can you do ?
Nothing is certain ; we 're always playing the odds with any information we 're given .</tokentext>
<sentencetext>When I first heard about background radiation, I thought to myself, didn't George Gamow predict that in one of his Mr Tompkins books which I read either in high school or junior high school?
(and these books were written for juveniles).
In fact Gamow did predict it.
I read years later in Timothy Ferris's book "The Red Limit, The Search For the Edge of the Universe" that Gamow was astonished that the discoverers of the background radiation did not credit his insight.
The book also mentions various scientists who were aware of the theory predicting background radiation but who didn't make the connection.
Some of them apparently admitted feeling stupid about it afterwards.
I guess I get worked up about this because Gamow actually gave a talk at the college where I was a student perhaps a year before he died.
I had looked forward to seeing him speak but he was obviously in very bad shape.
Thinking about it now, I don't know if it was in the Mr Tompkins books specifically, but it was in a book of Gamow's written for laymen.
I remember he had something he called ylem.
So, if I, a layman could make the connection as soon as I heard about the detection of the background radiation, how come the experts couldn't?
This is something that used to amaze me, but I've come across enough stories of experts screwing up over the years (just saw a documentary about Bernard Madoff for instance), that nowadays I'm less amazed.
Still, what can you do?
Nothing is certain; we're always playing the odds with any information we're given.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601542</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602064</id>
	<title>Re:You never discard the data</title>
	<author>nedlohs</author>
	<datestamp>1259852820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Most of the time that the data doesn't make sense it's because the scientist fucked up and didn't calibrate the sensor, or mislabeled a sample, or made a tpyo.</p><p>But yes, discarding it is the wrong thing to do. Repeating it again (and if you now get what you expected, doing it a third time) is usually the thing to do.</p><p>Of course if it is expensive, then just write down the results the theory predicts with a fudge factor for error. Better to retard humanity's knowledge of the world than to maybe look silly.</p></htmltext>
<tokenext>Most of the time that the data does n't make sense it 's because the scientist fucked up and did n't calibrate the sensor , or mislabeled a sample , or made a tpyo . But yes , discarding it is the wrong thing to do .
Repeating it again ( and if you now get what you expected , doing it a third time ) is usually the thing to do . Of course if it is expensive , then just write down the results the theory predicts with a fudge factor for error .
Better to retard humanity 's knowledge of the world than to maybe look silly .</tokentext>
<sentencetext>Most of the time that the data doesn't make sense it's because the scientist fucked up and didn't calibrate the sensor, or mislabeled a sample, or made a tpyo. But yes, discarding it is the wrong thing to do.
Repeating it again (and if you now get what you expected, doing it a third time) is usually the thing to do. Of course if it is expensive, then just write down the results the theory predicts with a fudge factor for error.
Better to retard humanity's knowledge of the world than to maybe look silly.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30603388</id>
	<title>Re:Ridiculous</title>
	<author>dcollins</author>
	<datestamp>1259868660000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>This is well known and called the "file drawer effect" (or publication bias).</p><p><a href="http://en.wikipedia.org/wiki/Publication\_bias" title="wikipedia.org">http://en.wikipedia.org/wiki/Publication\_bias</a> [wikipedia.org]</p><p>"In September 2004, editors of several prominent medical journals (including the New England Journal of Medicine, The Lancet, Annals of Internal Medicine, and JAMA) announced that they would no longer publish results of drug research sponsored by pharmaceutical companies unless that research was registered in a public database from the start.[11] In this way, negative results should no longer be able to disappear."</p></htmltext>
<tokenext>This is well known and called the " file drawer effect " ( or publication bias ) . http : //en.wikipedia.org/wiki/Publication \ _bias [ wikipedia.org ] " In September 2004 , editors of several prominent medical journals ( including the New England Journal of Medicine , The Lancet , Annals of Internal Medicine , and JAMA ) announced that they would no longer publish results of drug research sponsored by pharmaceutical companies unless that research was registered in a public database from the start .
[ 11 ] In this way , negative results should no longer be able to disappear .
"</tokentext>
<sentencetext>This is well known and called the "file drawer effect" (or publication bias). http://en.wikipedia.org/wiki/Publication\_bias [wikipedia.org] "In September 2004, editors of several prominent medical journals (including the New England Journal of Medicine, The Lancet, Annals of Internal Medicine, and JAMA) announced that they would no longer publish results of drug research sponsored by pharmaceutical companies unless that research was registered in a public database from the start.
[11] In this way, negative results should no longer be able to disappear.
"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601648</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30603340</id>
	<title>Re:Or you can edit your data....</title>
	<author>Anonymous</author>
	<datestamp>1259868180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>The "sceptic" arguments are nothing but a parade of cherry picking with little attempt at genuine investigation.</p></div><p>Only if you don't actually look around. <a href="http://online.wsj.com/article/SB10001424052748703939404574567423917025400.html" title="wsj.com">Richard Lindzen</a> [wsj.com] is a climate researcher at MIT, and has investigated it well (he was one of the authors of the IPCC report).  His argument is that there is no strong evidence linking anthropogenic CO2 and a global crisis.<br> <br>
And he is right.  <a href="http://www.ipcc.ch/" title="www.ipcc.ch">Check out the evidence for yourself</a> [www.ipcc.ch]. Look at it critically, and try to see if they can establish a link.  They can't.</p>
	</htmltext>
<tokenext>The " sceptic " arguments are nothing but a parade of cherry picking with little attempt at genuine investigation.Only if you do n't actually look around .
Richard Lindzen [ wsj.com ] is a climate researcher at MIT , and has investigated it well ( he was one of the authors of the IPCC report ) .
His argument is that there is no strong evidence linking anthropogenic CO2 and a global crisis .
And he is right .
Check out the evidence for yourself [ www.ipcc.ch ] .
Look at it critically , and try to see if they can establish a link .
They ca n't .</tokentext>
<sentencetext>The "sceptic" arguments are nothing but a parade of cherry picking with little attempt at genuine investigation.Only if you don't actually look around.
Richard Lindzen [wsj.com] is a climate researcher at MIT, and has investigated it well (he was one of the authors of the IPCC report).
His argument is that there is no strong evidence linking anthropogenic CO2 and a global crisis.
And he is right.
Check out the evidence for yourself [www.ipcc.ch].
Look at it critically, and try to see if they can establish a link.
They can't.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602002</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602030</id>
	<title>Re:Ridiculous</title>
	<author>KliX</author>
	<datestamp>1259852460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If the search space was that simple, we'd know everything by now.</p></htmltext>
<tokenext>If the search space was that simple , we 'd know everything by now .</tokentext>
<sentencetext>If the search space was that simple, we'd know everything by now.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601648</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601648</id>
	<title>Re:Ridiculous</title>
	<author>Anonymous</author>
	<datestamp>1259849160000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext><p>And this is what bothers me. If you are willing to run an experiment enough times, you will eventually get data to support your assertions. Get a statistical 90\% certainty, and it could be that you ran the scenario 100 times, and throw out the 99 times that did not give you this certainty. The scientific process is bullet proof. The folks who "do science" not necessarily so.</p></htmltext>
<tokenext>And this is what bothers me .
If you are willing to run an experiment enough times , you will eventually get data to support your assertions .
Get a statistical 90 \ % certainty , and it could be that you ran the scenario 100 times , and throw out the 99 times that did not give you this certainty .
The scientific process is bullet proof .
The folks who " do science " not necessarily so .</tokentext>
<sentencetext>And this is what bothers me.
If you are willing to run an experiment enough times, you will eventually get data to support your assertions.
Get a statistical 90\% certainty, and it could be that you ran the scenario 100 times, and throw out the 99 times that did not give you this certainty.
The scientific process is bullet proof.
The folks who "do science" not necessarily so.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30605538</id>
	<title>Re:Ridiculous</title>
	<author>KnownIssues</author>
	<datestamp>1262276940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The article says nothing about scientists keeping bad data simply to support a hypothesis. This is what separates it from religion.</htmltext>
<tokenext>The article says nothing about scientists keeping bad data simply to support a hypothesis .
This is what separates it from religion .</tokentext>
<sentencetext>The article says nothing about scientists keeping bad data simply to support a hypothesis.
This is what separates it from religion.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30623850</id>
	<title>Re:Or you can edit your data....</title>
	<author>Anonymous</author>
	<datestamp>1230921180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Those "sceptic" arguments just plain stink.</p></htmltext>
<tokenext>Those " sceptic " arguments just plain stink .</tokentext>
<sentencetext>Those "sceptic" arguments just plain stink.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602002</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601660</id>
	<title>Re:Ridiculous</title>
	<author>Shadow of Eternity</author>
	<datestamp>1259849340000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext><p>Not always, sometimes your data doesn't make sense because you made a mistake somewhere that wound up turning your results into garbage.</p></htmltext>
<tokenext>Not always , sometimes your data does n't make sense because you made a mistake somewhere that wound up turning your results into garbage .</tokentext>
<sentencetext>Not always, sometimes your data doesn't make sense because you made a mistake somewhere that wound up turning your results into garbage.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30604956
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30603556
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601988
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602030
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601648
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602844
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601660
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30603116
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601542
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601912
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602464
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601956
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601648
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602362
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601768
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602592
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601662
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601816
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602850
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601872
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30603274
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601988
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602250
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602922
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601648
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602056
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602540
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30605088
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601988
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602022
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30606932
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601542
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601974
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601648
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30623850
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602002
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601784
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602228
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601872
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602232
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601872
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_46</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30603048
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602892
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601872
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30611738
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601542
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30604472
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602156
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30604426
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30603340
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602002
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601784
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30605872
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602438
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601648
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602140
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30605538
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602200
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602970
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602108
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601648
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601850
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602016
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602064
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602694
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601916
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601648
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30603388
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601648
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602568
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30605532
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601988
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602524
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30606876
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_30_2321238_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601694
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_30_2321238.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601530
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_30_2321238.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601662
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602592
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_30_2321238.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601542
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30611738
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30606932
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30603116
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_30_2321238.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601586
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602694
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602200
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602056
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601816
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602156
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601768
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602362
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30605872
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601850
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602064
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601872
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602850
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602228
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602232
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602892
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30603048
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30604956
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30604472
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_30_2321238.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601656
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_30_2321238.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601988
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30605088
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30603274
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30605532
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30603556
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_30_2321238.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601514
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_30_2321238.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601784
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602002
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30623850
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30603340
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_30_2321238.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601566
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602540
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602524
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602022
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602250
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601694
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602464
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602140
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601648
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602030
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602108
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602970
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602922
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30603388
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601956
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601916
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601974
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602438
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30604426
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30605538
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601912
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30601660
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602844
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602016
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30606876
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_30_2321238.30602568
</commentlist>
</conversation>
