<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_07_08_1526230</id>
	<title>This Is Your Brain On Magnets &#8212; Or Maybe Not</title>
	<author>Soulskill</author>
	<datestamp>1247067060000</datestamp>
	<htmltext>conspirator23 writes <i>"Jon Hamilton of National Public Radio brings us a story about <a href="http://www.npr.org/templates/story/story.php?storyId=106235924&amp;ft=1&amp;f=2">'voodoo correlations' in fMRI studies</a> that seek to learn more about emotional states, personality, and social cognition in the human brain. Many of us outside the scientific community have been treated to fascinating images of brain activity and corresponding explanations about how the images reveal which portions of the brain are engaged in certain kinds of thinking. But <a href="http://www.edvul.com/voodoocorr.php">these images are not actual snapshots</a>; they are visualizations of data generated by repeated scans during experiments. Flaws in the statistical methods used by researchers can result in false images with a variety of inaccuracies. Yet the images produced are so vivid and engaging that even other neuroscientists can be misled by them."</i></htmltext>
<tokentext>conspirator23 writes " Jon Hamilton of National Public Radio brings us a story about 'voodoo correlations ' in fMRI studies that seek to learn more about emotional states , personality , and social cognition in the human brain .
Many of us outside the scientific community have been treated to fascinating images of brain activity and corresponding explanations about how the images reveal which portions of the brain are engaged in certain kinds of thinking .
But these images are not actual snapshots ; they are visualizations of data generated by repeated scans during experiments .
Flaws in the statistical methods used by researchers can result in false images with a variety of inaccuracies .
Yet the images produced are so vivid and engaging that even other neuroscientists can be misled by them .
"</tokentext>
<sentencetext>conspirator23 writes "Jon Hamilton of National Public Radio brings us a story about 'voodoo correlations' in fMRI studies that seek to learn more about emotional states, personality, and social cognition in the human brain.
Many of us outside the scientific community have been treated to fascinating images of brain activity and corresponding explanations about how the images reveal which portions of the brain are engaged in certain kinds of thinking.
But these images are not actual snapshots; they are visualizations of data generated by repeated scans during experiments.
Flaws in the statistical methods used by researchers can result in false images with a variety of inaccuracies.
Yet the images produced are so vivid and engaging that even other neuroscientists can be misled by them.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623229</id>
	<title>Really Useful?</title>
	<author>squoozer</author>
	<datestamp>1247071440000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>I've always wondered how useful these images really are. Perhaps to the trained eye they can reveal a lot about how a person's brain works but they have always struck me as being too abstract. We can point at a portion of the image and say that bit controls movement, for example, but if anything goes wrong we are stuck because at a fundamental level we don't understand how it controls movement. I suppose it's a bit like looking at a block diagram for a CPU and not understanding how each bit works.</p><p>It will be interesting to see how we achieve the next level of understanding of the brain's functioning. I can't see that we will ever get there with MRI or electrode probes because, I think, they are simply too large to get a true understanding of what is going on. I suspect we will gain our understanding through modelling but I'm not sure I'll be around when we do.</p></htmltext>
<tokentext>I 've always wondered how useful these images really are .
Perhaps to the trained eye they can reveal a lot about how a person 's brain works but they have always struck me as being too abstract .
We can point at a portion of the image and say that bit controls movement , for example , but if anything goes wrong we are stuck because at a fundamental level we do n't understand how it controls movement .
I suppose it 's a bit like looking at a block diagram for a CPU and not understanding how each bit works . It will be interesting to see how we achieve the next level of understanding of the brain 's functioning .
I ca n't see that we will ever get there with MRI or electrode probes because , I think , they are simply too large to get a true understanding of what is going on .
I suspect we will gain our understanding through modelling but I 'm not sure I 'll be around when we do .</tokentext>
<sentencetext>I've always wondered how useful these images really are.
Perhaps to the trained eye they can reveal a lot about how a person's brain works but they have always struck me as being too abstract.
We can point at a portion of the image and say that bit controls movement, for example, but if anything goes wrong we are stuck because at a fundamental level we don't understand how it controls movement.
I suppose it's a bit like looking at a block diagram for a CPU and not understanding how each bit works. It will be interesting to see how we achieve the next level of understanding of the brain's functioning.
I can't see that we will ever get there with MRI or electrode probes because, I think, they are simply too large to get a true understanding of what is going on.
I suspect we will gain our understanding through modelling but I'm not sure I'll be around when we do.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28625639</id>
	<title>Lite-Brite Phrenology</title>
	<author>Anonymous</author>
	<datestamp>1247079840000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Just wanted to mention my favorite diss of fMRI - Lite-Brite Phrenology.</p></htmltext>
<tokentext>Just wanted to mention my favorite diss of fMRI - Lite-Brite Phrenology .</tokentext>
<sentencetext>Just wanted to mention my favorite diss of fMRI - Lite-Brite Phrenology.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623109</id>
	<title>Voodoo + brains =</title>
	<author>Anonymous</author>
	<datestamp>1247070960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There's a "zombies" joke in there somewhere.</p></htmltext>
<tokentext>There 's a " zombies " joke in there somewhere .</tokentext>
<sentencetext>There's a "zombies" joke in there somewhere.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28626973</id>
	<title>Re:Voodoo + brains =</title>
	<author>db32</author>
	<datestamp>1247084520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Don't worry.  It has been buried.</htmltext>
<tokentext>Do n't worry .
It has been buried .</tokentext>
<sentencetext>Don't worry.
It has been buried.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623109</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28624055</id>
	<title>Lucky nobody's getting carried away with this</title>
	<author>wjousts</author>
	<datestamp>1247074500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Oh wait <a href="http://www.nytimes.com/2008/09/15/world/asia/15brainscan.html?_r=2&amp;oref=slogin" title="nytimes.com">yes they are!</a> [nytimes.com] </p></htmltext>
<tokentext>Oh wait yes they are !
[ nytimes.com ]</tokentext>
<sentencetext>Oh wait yes they are!
[nytimes.com] </sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28629753</id>
	<title>In soviet russia...</title>
	<author>Anonymous</author>
	<datestamp>1247055180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>MRI f you!</htmltext>
<tokentext>MRI f you !</tokentext>
<sentencetext>MRI f you!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28732173</id>
	<title>Re:</title>
	<author>clint999</author>
	<datestamp>1247855400000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext><div class="quote"><p>If little magnets were going to affect your brain, wouldn't anyone who'd had a brain scan end up a vegetable? Of course it pays to sell idiots little magnets and claim all sorts of health benefits. Some may even benefit from a placebo effect. (It doesn't pay to try to sell them MRI machines...there are so few idiots THAT rich). I think i'll remain skeptical unless more solid evidence turns up.</p></div>
	</htmltext>
<tokentext>If little magnets were going to affect your brain , would n't anyone who 'd had a brain scan end up a vegetable ? Of course it pays to sell idiots little magnets and claim all sorts of health benefits .
Some may even benefit from a placebo effect .
( It does n't pay to try to sell them MRI machines...there are so few idiots THAT rich ) . I think i 'll remain skeptical unless more solid evidence turns up .</tokentext>
<sentencetext>If little magnets were going to affect your brain, wouldn't anyone who'd had a brain scan end up a vegetable? Of course it pays to sell idiots little magnets and claim all sorts of health benefits.
Some may even benefit from a placebo effect.
(It doesn't pay to try to sell them MRI machines...there are so few idiots THAT rich). I think i'll remain skeptical unless more solid evidence turns up.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623635</id>
	<title>Re:Really Useful?</title>
	<author>msparker</author>
	<datestamp>1247072880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>I suspect we will gain our understanding through modelling but I'm not sure I'll be around when we do.</p></div><p>I agree.  I've always thought that one of Edelman's conscious artifacts, <a href="http://www.21stcentury.co.uk/robotics/nomad.asp" title="21stcentury.co.uk" rel="nofollow">http://www.21stcentury.co.uk/robotics/nomad.asp</a> [21stcentury.co.uk], would be the way in to a better understanding of the brain, but I haven't kept up with their progress.  I'm still hoping they'll find some answers while I'm around.</p>
	</htmltext>
<tokentext>I suspect we will gain our understanding through modelling but I 'm not sure I 'll be around when we do . I agree .
I 've always thought that one of Edelman 's conscious artifacts , http : //www.21stcentury.co.uk/robotics/nomad.asp [ 21stcentury.co.uk ] , would be the way in to a better understanding of the brain , but I have n't kept up with their progress .
I 'm still hoping they 'll find some answers while I 'm around .</tokentext>
<sentencetext>I suspect we will gain our understanding through modelling but I'm not sure I'll be around when we do. I agree.
I've always thought that one of Edelman's conscious artifacts, http://www.21stcentury.co.uk/robotics/nomad.asp [21stcentury.co.uk], would be the way in to a better understanding of the brain, but I haven't kept up with their progress.
I'm still hoping they'll find some answers while I'm around.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623229</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28627223</id>
	<title>Re:"These images are not snapshots"? No kidding.</title>
	<author>Anonymous</author>
	<datestamp>1247085420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>...and yet, it does.  It's become so routine, so reliable, so well-understood and well-controlled, that doctors and researchers know they can rely on it as a matter of course.  They still have to be aware of the errors and distortions that can arise, but that's true of every imaging or monitoring system, all the way down to the stethoscope and the fever thermometer.</p></div><p>The problem with the activation maps is precisely that one is NOT looking at an image, so there's no way to fine tune the algorithms.  Therefore, fMRI is NOT well understood in the way that CT or MRI are.</p><p>Consider that in imaging, you have the luxury of comparing the output of a brain scan to the known physical structure of the brain.  Is there a hippocampus?  No?  Well then it didn't work, go back and fiddle until you can show me a hippocampus.</p><p>In fMRI, apart from low level sensory cortices (where visual field mapping techniques can reproduce broad level retinotopic maps), researchers are operating in a vacuum in which there is no hard and fast error signal to fine tune the methods.</p><p>Science has to proceed very cautiously in such a situation.
This is particularly true when one has hundreds of thousands of voxels to sift through because it's easy to find any pattern in noise, if you have enough noise.</p><p>So I would argue that fMRI offers a very different set of challenges compared to MRI and CT scans, and therefore it's very important to keep a sharp, critical eye on the statistics used, as these authors are doing.</p><p>To illustrate this point further, here is a link to a poster in which someone put a dead salmon into a magnet and found that (in the absence of proper statistical controls) its decomposing brain was apparently reacting to the emotional content of pictures:</p><p><a href="http://prefrontal.org/files/posters/Bennett-Salmon-2009.pdf" title="prefrontal.org">http://prefrontal.org/files/posters/Bennett-Salmon-2009.pdf</a> [prefrontal.org]</p>
	</htmltext>
<tokentext>...and yet , it does .
It 's become so routine , so reliable , so well-understood and well-controlled , that doctors and researchers know they can rely on it as a matter of course .
They still have to be aware of the errors and distortions that can arise , but that 's true of every imaging or monitoring system , all the way down to the stethoscope and the fever thermometer . The problem with the activation maps is precisely that one is NOT looking at an image , so there 's no way to fine tune the algorithms .
Therefore , fMRI is NOT well understood in the way that CT or MRI are . Consider that in imaging , you have the luxury of comparing the output of a brain scan to the known physical structure of the brain .
Is there a hippocampus ?
No ? Well then it did n't work , go back and fiddle until you can show me a hippocampus . In fMRI , apart from low level sensory cortices ( where visual field mapping techniques can reproduce broad level retinotopic maps ) , researchers are operating in a vacuum in which there is no hard and fast error signal to fine tune the methods . Science has to proceed very cautiously in such a situation .
This is particularly true when one has hundreds of thousands of voxels to sift through because it 's easy to find any pattern in noise , if you have enough noise . So I would argue that fMRI offers a very different set of challenges compared to MRI and CT scans , and therefore it 's very important to keep a sharp , critical eye on the statistics used , as these authors are doing . To illustrate this point further , here is a link to a poster in which someone put a dead salmon into a magnet and found that ( in the absence of proper statistical controls ) its decomposing brain was apparently reacting to the emotional content of pictures : http : //prefrontal.org/files/posters/Bennett-Salmon-2009.pdf [ prefrontal.org ]</tokentext>
<sentencetext> ...and yet, it does.
It's become so routine, so reliable, so well-understood and well-controlled, that doctors and researchers know they can rely on it as a matter of course.
They still have to be aware of the errors and distortions that can arise, but that's true of every imaging or monitoring system, all the way down to the stethoscope and the fever thermometer. The problem with the activation maps is precisely that one is NOT looking at an image, so there's no way to fine tune the algorithms.
Therefore, fMRI is NOT well understood in the way that CT or MRI are. Consider that in imaging, you have the luxury of comparing the output of a brain scan to the known physical structure of the brain.
Is there a hippocampus?
No?  Well then it didn't work, go back and fiddle until you can show me a hippocampus. In fMRI, apart from low level sensory cortices (where visual field mapping techniques can reproduce broad level retinotopic maps), researchers are operating in a vacuum in which there is no hard and fast error signal to fine tune the methods. Science has to proceed very cautiously in such a situation.
This is particularly true when one has hundreds of thousands of voxels to sift through because it's easy to find any pattern in noise, if you have enough noise. So I would argue that fMRI offers a very different set of challenges compared to MRI and CT scans, and therefore it's very important to keep a sharp, critical eye on the statistics used, as these authors are doing. To illustrate this point further, here is a link to a poster in which someone put a dead salmon into a magnet and found that (in the absence of proper statistical controls) its decomposing brain was apparently reacting to the emotional content of pictures: http://prefrontal.org/files/posters/Bennett-Salmon-2009.pdf [prefrontal.org]
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28624723</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28635377</id>
	<title>Re:"These images are not snapshots"? No kidding.</title>
	<author>Agripa</author>
	<datestamp>1247149320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Couldn't the image sampling be done synchronously with the heart beat for instance or is the integration time too long?</p></htmltext>
<tokentext>Could n't the image sampling be done synchronously with the heart beat for instance or is the integration time too long ?</tokentext>
<sentencetext>Couldn't the image sampling be done synchronously with the heart beat for instance or is the integration time too long?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28624723</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28630533</id>
	<title>I said it, I did</title>
	<author>DynaSoar</author>
	<datestamp>1247059680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Most of my psychology colleagues have no idea what they're looking at in fMRI. They assume if it lights up, it's making something go. They may know full well that neural activation can be excitatory or inhibitory, but fail to make the connection and figure out that what's lighting up may be de-activation. Both the gas pedal and the brakes appear the same to fMRI and nobody can tell which is which with this technology alone.</p><p>Even fewer even bother to try to grasp the math behind the analysis technique, statistical probability mapping. Every cubic pixel ("voxel") has to be compared to every one of its neighbors across the multiple data collections in each condition, and a T test applied. In order to prevent artifactual results due to this massive amount of statistical testing an error correcting normalization is applied. Statistical results of 0.05 or 5% are considered the limit for acceptable results. With the correction factor applied, each voxel that lights up is passing a statistical test with anywhere up to 22 digits below the zero that I've seen myself. Their lighting-up numbers are more far fetched than being hit by a falling meteor.</p><p>They almost invariably fail to mention areas that work with an area of their interest but fail to light up when their target does.</p><p>The problem extends beyond just the researchers. A recent article in the Proceedings of the National Academy was filled with these errors yet has become a part of a well respected body of works, poisoning it and giving others cause to continue believing their fallacies and compounding them with works justified by this one.</p></htmltext>
<tokentext>Most of my psychology colleagues have no idea what they 're looking at in fMRI .
They assume if it lights up , it 's making something go .
They may know full well that neural activation can be excitatory or inhibitory , but fail to make the connection and figure out that what 's lighting up may be de-activation .
Both the gas pedal and the brakes appear the same to fMRI and nobody can tell which is which with this technology alone . Even fewer even bother to try to grasp the math behind the analysis technique , statistical probability mapping .
Every cubic pixel ( " voxel " ) has to be compared to every one of its neighbors across the multiple data collections in each condition , and a T test applied .
In order to prevent artifactual results due to this massive amount of statistical testing an error correcting normalization is applied .
Statistical results of 0.05 or 5 % are considered the limit for acceptable results .
With the correction factor applied , each voxel that lights up is passing a statistical test with anywhere up to 22 digits below the zero that I 've seen myself .
Their lighting-up numbers are more far fetched than being hit by a falling meteor . They almost invariably fail to mention areas that work with an area of their interest but fail to light up when their target does . The problem extends beyond just the researchers .
A recent article in the Proceedings of the National Academy was filled with these errors yet has become a part of a well respected body of works , poisoning it and giving others cause to continue believing their fallacies and compounding them with works justified by this one .</tokentext>
<sentencetext>Most of my psychology colleagues have no idea what they're looking at in fMRI.
They assume if it lights up, it's making something go.
They may know full well that neural activation can be excitatory or inhibitory, but fail to make the connection and figure out that what's lighting up may be de-activation.
Both the gas pedal and the brakes appear the same to fMRI and nobody can tell which is which with this technology alone. Even fewer even bother to try to grasp the math behind the analysis technique, statistical probability mapping.
Every cubic pixel ("voxel") has to be compared to every one of its neighbors across the multiple data collections in each condition, and a T test applied.
In order to prevent artifactual results due to this massive amount of statistical testing an error correcting normalization is applied.
Statistical results of 0.05 or 5% are considered the limit for acceptable results.
With the correction factor applied, each voxel that lights up is passing a statistical test with anywhere up to 22 digits below the zero that I've seen myself.
Their lighting-up numbers are more far fetched than being hit by a falling meteor. They almost invariably fail to mention areas that work with an area of their interest but fail to light up when their target does. The problem extends beyond just the researchers.
A recent article in the Proceedings of the National Academy was filled with these errors yet has become a part of a well respected body of works, poisoning it and giving others cause to continue believing their fallacies and compounding them with works justified by this one.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623399</id>
	<title>Well, what is the noise level?</title>
	<author>Khashishi</author>
	<datestamp>1247072100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It's easy to dismiss the results as noise, but any decent researcher will estimate her or his error bars, and show that the signal measured is, indeed, small or comparable to the error bars. The way the article is written, it just sounds like they just totally ignored the results because they don't like them or something. To be fair, the paper itself probably does a better job of defending its position, but I don't have time to understand all its details.</p></htmltext>
<tokentext>It 's easy to dismiss the results as noise , but any decent researcher will estimate her or his error bars , and show that the signal measured is , indeed , small or comparable to the error bars .
The way the article is written , it just sounds like they just totally ignored the results because they do n't like them or something .
To be fair , the paper itself probably does a better job of defending its position , but I do n't have time to understand all its details .</tokentext>
<sentencetext>It's easy to dismiss the results as noise, but any decent researcher will estimate her or his error bars, and show that the signal measured is, indeed, small or comparable to the error bars.
The way the article is written, it just sounds like they just totally ignored the results because they don't like them or something.
To be fair, the paper itself probably does a better job of defending its position, but I don't have time to understand all its details.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28628301</id>
	<title>MRI is one huge ass magnet</title>
	<author>syousef</author>
	<datestamp>1247047320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If little magnets were going to affect your brain, wouldn't anyone who'd had a brain scan end up a vegetable?</p><p>Of course it pays to sell idiots little magnets and claim all sorts of health benefits. Some may even benefit from a placebo effect. (It doesn't pay to try to sell them MRI machines...there are so few idiots THAT rich).</p><p>I think i'll remain skeptical unless more solid evidence turns up.</p></htmltext>
<tokentext>If little magnets were going to affect your brain , would n't anyone who 'd had a brain scan end up a vegetable ? Of course it pays to sell idiots little magnets and claim all sorts of health benefits .
Some may even benefit from a placebo effect .
( It does n't pay to try to sell them MRI machines...there are so few idiots THAT rich ) . I think i 'll remain skeptical unless more solid evidence turns up .</tokentext>
<sentencetext>If little magnets were going to affect your brain, wouldn't anyone who'd had a brain scan end up a vegetable? Of course it pays to sell idiots little magnets and claim all sorts of health benefits.
Some may even benefit from a placebo effect.
(It doesn't pay to try to sell them MRI machines...there are so few idiots THAT rich). I think i'll remain skeptical unless more solid evidence turns up.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28626521</id>
	<title>the pretty-picture effect</title>
	<author>Anonymous</author>
	<datestamp>1247082780000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I suspect the pretty-picture effect is likely to generalize to just about any science, though I am only aware of this having been directly tested and confirmed in neuroscience (e.g. McCabe &amp; Castel, 2007; Cognition). Although the philosophy behind peer review is sound, it still suffers from the fact that our peers are humans - encumbered by all manner of flawed reasoning. Unfortunately, one of those flaws is that we are easily compelled by impressive looking pictures - regardless of our expertise. Perhaps this is because we implicitly assume that impressive tools and impressive research are necessarily correlated. Sadly, they are not, and we are left with islands of extremely valuable fMRI (and equally EEG / ERP, MEG, DTI - what have you) research, floating amidst a host of absolute rubbish.</p></htmltext>
<tokentext>I suspect the pretty-picture effect is likely to generalize to just about any science , though I am only aware of this having been directly tested and confirmed in neuroscience ( e.g. McCabe &amp; Castel , 2007 ; Cognition ) .
Although the philosophy behind peer review is sound , it still suffers from the fact that our peers are humans - encumbered by all manner of flawed reasoning .
Unfortunately , one of those flaws is that we are easily compelled by impressive looking pictures - regardless of our expertise .
Perhaps this is because we implicitly assume that impressive tools and impressive research are necessarily correlated .
Sadly , they are not , and we are left with islands of extremely valuable fMRI ( and equally EEG / ERP , MEG , DTI - what have you ) research , floating amidst a host of absolute rubbish .</tokentext>
<sentencetext>I suspect the pretty-picture effect is likely to generalize to just about any science, though I am only aware of this having been directly tested and confirmed in neuroscience (e.g. McCabe &amp; Castel, 2007; Cognition).
Although the philosophy behind peer review is sound, it still suffers from the fact that our peers are humans - encumbered by all manner of flawed reasoning.
Unfortunately, one of those flaws is that we are easily compelled by impressive looking pictures - regardless of our expertise.
Perhaps this is because we implicitly assume that impressive tools and impressive research are necessarily correlated.
Sadly, they are not, and we are left with islands of extremely valuable fMRI (and equally EEG / ERP, MEG, DTI - what have you) research, floating amidst a host of absolute rubbish.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623169</id>
	<title>These beautiful visualizations</title>
	<author>Anonymous</author>
	<datestamp>1247071200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>REALLY?</p><p>*clicks link to look at pictures*</p><p>*curses internet*</p></htmltext>
<tokentext>REALLY ?
* clicks link to look at pictures * * curses internet *</tokentext>
<sentencetext>REALLY?
*clicks link to look at pictures* *curses internet*</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28734569</id>
	<title>Re:</title>
	<author>clint999</author>
	<datestamp>1247823000000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext><strong>If little magnets were going to affect your brain, wouldn't anyone who'd had a brain scan end up a vegetable? Of course it pays to sell idiots little magnets and claim all sorts of health benefits. Some may even benefit from a placebo effect. (It doesn't pay to try to sell them MRI machines...there are so few idiots THAT rich). I think i'll remain skeptical unless more solid evidence turns up.</strong></htmltext>
<tokentext>If little magnets were going to affect your brain , would n't anyone who 'd had a brain scan end up a vegetable ? Of course it pays to sell idiots little magnets and claim all sorts of health benefits .
Some may even benefit from a placebo effect .
( It does n't pay to try to sell them MRI machines...there are so few idiots THAT rich ) . I think i 'll remain skeptical unless more solid evidence turns up .</tokentext>
<sentencetext>If little magnets were going to affect your brain, wouldn't anyone who'd had a brain scan end up a vegetable?Of course it pays to sell idiots little magnets and claim all sorts of health benefits.
Some may even benefit from a placebo effect.
(It doesn't pay to try to sell them MRI machines... there are so few idiots THAT rich.) I think I'll remain skeptical unless more solid evidence turns up.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28635931</id>
	<title>Re:"These images are not snapshots"? No kidding.</title>
	<author>Ihlosi</author>
	<datestamp>1247151960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>For CT, we acquire a bunch of 2D images through you from different angles,</i> </p><p>Wasn't that "a whole bunch of 1D images, which are then computed into 2D-slices, which are then assembled into 3D volume representations"?</p><p>Last time I checked, CTs acquired one-dimensional images. However, it's been a while, and I work in a different field of biomedical engineering, so I haven't really kept up with more recent developments.</p></htmltext>
<tokenext>For CT , we acquire a bunch of 2D images through you from different angles , Was n't that " a whole bunch of 1D images , which are then computed into 2D-slices , which are then assembled into 3D volume representations " ? Last time I checked , CTs acquired one-dimensional images .
However , it 's been a while , and I work in a different field of biomedical engineering , so I have n't really kept up with more recent developments .</tokentext>
<sentencetext>For CT, we acquire a bunch of 2D images through you from different angles, Wasn't that "a whole bunch of 1D images, which are then computed into 2D-slices, which are then assembled into 3D volume representations"?Last time I checked, CTs acquired one-dimensional images.
However, it's been a while, and I work in a different field of biomedical engineering, so I haven't really kept up with more recent developments.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28624723</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623223</id>
	<title>Pretty Familiar to Me</title>
	<author>eldavojohn</author>
	<datestamp>1247071380000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext>These sorts of images are pretty familiar to me and I must admit I was never skeptical of research showing that you could classify brain patterns based on the object they were looking at or how they were feeling.  I had thought this had gone so far as to be used to classify terrorists and used in trials (which is quite unnerving)!  Well, it saddens me to say this but in a field where we normally take two steps forward today, we are taking one giant step back.  The brain is such a complex thing to study, spanning biology, chemistry, electromagnetic physics, psychiatry and psychology.  The line where the physical sciences stop and the psychological sciences start is so blurred and confusing, it's a shame that one of the few tools used to determine the hows and whys of it is being called into question.  I think a lot of us hoped there was some hard scientific way to unravel this mystery of cognizance and conscience.  After reading the article, it's a good thing this happened but a shame for quite a bit of research out there that must now be re-examined.</htmltext>
<tokenext>These sort of images are pretty familiar to me and I must admit I was never skeptical of research showing that you could classify brain patterns based on the object they were looking at or how they were feeling .
I had thought this had gone so far as to be used to classify terrorists and used in trials ( which is quite unnerving ) !
Well , it saddens me to say this but in a field where we normally take two steps forward today , we are taking one giant step back .
The brain is such a complex thing to study concerning biology , chemistry , electromagnetic physics , psychiatry and psychology .
The line where the physical sciences stop and the psychological science starts is so blurred and confusing , it 's a shame that one of the few tools used to determine the hows and whys of it is being called into question .
I think a lot of us hoped there was some hard scientific way to unravel this mystery of cognizance and conscience .
After reading the article , it 's a good thing this happened but a shame for quite a bit of research out there that must now be re-examined .</tokentext>
<sentencetext>These sorts of images are pretty familiar to me and I must admit I was never skeptical of research showing that you could classify brain patterns based on the object they were looking at or how they were feeling.
I had thought this had gone so far as to be used to classify terrorists and used in trials (which is quite unnerving)!
Well, it saddens me to say this but in a field where we normally take two steps forward today, we are taking one giant step back.
The brain is such a complex thing to study concerning biology, chemistry, electromagnetic physics, psychiatry and psychology.
The line where the physical sciences stop and the psychological science starts is so blurred and confusing, it's a shame that one of the few tools used to determine the hows and whys of it is being called into question.
I think a lot of us hoped there was some hard scientific way to unravel this mystery of cognizance and conscience.
After reading the article, it's a good thing this happened but a shame for quite a bit of research out there that must now be re-examined.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28624173</id>
	<title>Can I get High with Magnets????</title>
	<author>jameskojiro</author>
	<datestamp>1247074980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If I buy some of those powerful Neodymium magnets and move them over my head, can I get high that way?</p><p>I would like to know because that would save me a lot of money, or would I have to keep buying more and more powerful magnets to keep achieving the same "high" effect?</p></htmltext>
<tokenext>If I buy some of those powerful Neodymium magnets and move them over my head can I get high that way ? I would like to know because that would save me a lot of money , or would I have to keep buying more and more powerful magnets to keep achieving the same " high " effect ?</tokentext>
<sentencetext>If I buy some of those powerful Neodymium magnets and move them over my head can I get high that way? I would like to know because that would save me a lot of money, or would I have to keep buying more and more powerful magnets to keep achieving the same "high" effect?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28624041</id>
	<title>No good neuroscientist is going purely off fMRI</title>
	<author>LockeOnLogic</author>
	<datestamp>1247074500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>fMRI is one of many imaging techniques which continue to evolve and give us more and more amazing data about the brain. But it's just data, subject to the limitations of its technology. Any neuroscientist worth their salt knows the limitations of their tools. The problem described in the article is something people have been discussing as a valid methodological criticism of some studies, not of all fMRI data. The summary is misleading; basically it's saying that a tool used incorrectly produces bad data. Duh.
<br> <br>
fMRI has great spatial but bad temporal resolution<br>
EEG and MEG have great temporal resolution but bad spatial<br>
PET has amazing metabolic resolution but fuzzy temporal and spatial resolution<br> <br>

These are a few examples. The media and bad researchers can't get past the pretty visualizations, but fit into a proper theoretical model of exploration, fMRI still remains one of the most amazing tools of brain exploration thus far.</htmltext>
<tokenext>fMRI is one of many imaging techniques which continue to evolve and give us more and more amazing data about the brain .
But its just data , and subject to the limitations of its tech .
Any neuroscientist worth their salt knows the limitations of their technology .
The problem described in the article is something people have been discussing as a valid methodological criticism of some studies , not all fMRI data .
The summary is misleading , basically its saying a tool used incorrectly results in bad data .
Duh . fMRI has great spatial but bad temporal resolution EEG and MEG and great temporal resolution but bad spatial PET has amazing metabolic resolution but fuzzy temporal and spatial resolution These are a few examples .
The media and bad researchers ca n't get past the pretty visualization but fit into a proper theoretical model of exploration it still remains one of the most amazing tools of brain exploration thus far .</tokentext>
<sentencetext>fMRI is one of many imaging techniques which continue to evolve and give us more and more amazing data about the brain.
But it's just data, subject to the limitations of its technology.
Any neuroscientist worth their salt knows the limitations of their tools.
The problem described in the article is something people have been discussing as a valid methodological criticism of some studies, not of all fMRI data.
The summary is misleading; basically it's saying that a tool used incorrectly produces bad data.
Duh.
 
fMRI has great spatial but bad temporal resolution
EEG and MEG have great temporal resolution but bad spatial
PET has amazing metabolic resolution but fuzzy temporal and spatial resolution 

These are a few examples.
The media and bad researchers can't get past the pretty visualizations, but fit into a proper theoretical model of exploration, fMRI still remains one of the most amazing tools of brain exploration thus far.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623799</id>
	<title>The data is a useful starting point..</title>
	<author>wanax</author>
	<datestamp>1247073540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I think most people in the neuroscience community are aware of the limits of current fMRI approaches. The general linear model, which is used to compute blood flow, is rightly under considerable attack from a number of directions (it assumes, among other things, that all measured hemodynamic response is the result of changes in underlying neural activity, and there is now quite a bit of evidence that this is not the case). And the basic paradigm for most fMRI experiments, especially ones examining 'higher cognition' and emotion, is the deeply limited subtractive inference (which was once described to me as "You show somebody a picture of their ass, and you show them a picture of a hole in the ground, subtract the two responses, and claim you've figured out the area responsible for discerning your asshole from a hole in the ground"). The 'double dipping' described in the article is actually quite a minor concern compared with the above, although certainly a real one.</p><p>But the real value of fMRI, regardless of the deeply flawed current methodologies, is that it does give us a very good idea what areas of the brain we should be looking at with other experimental techniques, such as various types of electrophysiology, anatomical tracing, inactivation, etc. And the good news is that there are fMRI methods being developed which are much more robust and will be able to tell us a great deal more about what's going on in the entire brain over various tasks, such as multispectral MRI and attempts to use stronger magnets to directly measure currents rather than blood flow. So while I certainly agree that there is an 'Oooh! Shiny!' element to a lot of current fMRI research on higher cognition, and one should be deeply skeptical about many of the assertions made on the basis of such data, that doesn't mean fMRI is not an incredibly useful research method, and it is likely to become even more so.</p></htmltext>
<tokenext>I think most people in the neuroscience community are aware of the limits of current fMRI approaches .
The general linear model , which is used to compute blood flow , is rightly under considerable attack from a number of directions ( it assumes , among other things , that all measured hemodynamic response is the result of changes in underlying neural activity , and there is now quite a bit of evidence that this is not the case ) .
And the basic paradigm for most fMRI experiments , especially ones examining 'higher cognition ' and emotion is the deeply limited subtractive inference ( which was once described to me as " You show somebody a picture of their ass , and you show them a picture of a hole in the ground , subtract the two responses , and claim you 've figured out the area responsible for discerning your asshole from a hole in the ground ) .
The 'double dipping ' described in the article is actually quite a minor concern compared with the above , although certainly a real one.But the real value of fMRI , regardless of the deeply flawed current methodologies , is that it does give us a very good idea what areas of the brain we should be looking at with other experimental techniques , such as various types of electrophysiology , anatomical tracing , inactivation etc... And the good news is that there are fMRI methods being developed which are much more robust and will be able to tell us a great deal more about what 's going on in the entire brain over various tasks , such as multispectral MRI and attempts to use stronger magnets to directly measure currents , rather than blood flow .
So while I certainly agree that there is an 'Oooh !
Shiny ! ' element to a lot of current fMRI research on higher cognition , and one should be deeply skeptical about many of the assertions made on the basis of such data , that does n't mean fMRI is not an incredibly useful research method , and is likely to become even more so .</tokentext>
<sentencetext>I think most people in the neuroscience community are aware of the limits of current fMRI approaches.
The general linear model, which is used to compute blood flow, is rightly under considerable attack from a number of directions (it assumes, among other things, that all measured hemodynamic response is the result of changes in underlying neural activity, and there is now quite a bit of evidence that this is not the case).
And the basic paradigm for most fMRI experiments, especially ones examining 'higher cognition' and emotion, is the deeply limited subtractive inference (which was once described to me as "You show somebody a picture of their ass, and you show them a picture of a hole in the ground, subtract the two responses, and claim you've figured out the area responsible for discerning your asshole from a hole in the ground").
The 'double dipping' described in the article is actually quite a minor concern compared with the above, although certainly a real one. But the real value of fMRI, regardless of the deeply flawed current methodologies, is that it does give us a very good idea what areas of the brain we should be looking at with other experimental techniques, such as various types of electrophysiology, anatomical tracing, inactivation, etc. And the good news is that there are fMRI methods being developed which are much more robust and will be able to tell us a great deal more about what's going on in the entire brain over various tasks, such as multispectral MRI and attempts to use stronger magnets to directly measure currents, rather than blood flow.
So while I certainly agree that there is an 'Oooh!
Shiny!' element to a lot of current fMRI research on higher cognition, and one should be deeply skeptical about many of the assertions made on the basis of such data, that doesn't mean fMRI is not an incredibly useful research method, and is likely to become even more so.</sentencetext>
</comment>
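The 'double dipping' wanax mentions is the core statistical complaint in the Vul paper: selecting voxels *because* they correlate with a behavioral score, then reporting the correlation computed on that same data. A minimal simulation (all numbers hypothetical, plain NumPy, pure noise with no true signal anywhere) shows how large the inflation can be:

```python
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_voxels = 20, 5000

behavior = rng.normal(size=n_subjects)               # behavioral score
voxels = rng.normal(size=(n_subjects, n_voxels))     # pure-noise "voxels"

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Step 1: screen every voxel against behavior, keep the best one
r_all = np.array([corr(voxels[:, v], behavior) for v in range(n_voxels)])
best = np.argmax(np.abs(r_all))

# "Double-dipped" estimate: selected and measured on the SAME data.
# With 5000 noise voxels and 20 subjects, this is typically huge.
r_dipped = r_all[best]

# Honest estimate: measure the selected voxel on independent data.
behavior2 = rng.normal(size=n_subjects)
voxels2 = rng.normal(size=(n_subjects, n_voxels))
r_independent = corr(voxels2[:, best], behavior2)
```

The fix the Vul paper argues for is exactly the last step: split the data, select regions on one half, and report effect sizes from the other half.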
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28625785</id>
	<title>Re:Well, what is the noise level?</title>
	<author>Hatta</author>
	<datestamp>1247080380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It's not the noise that's the problem.  It's the control.  Ideally you'd have a blank slate as a control, then do an activity, and measure the difference.  The problem is, the brain is always doing something, so it's really hard to make a controlled experiment.  It doesn't matter how tight your error bars are if you're not comparing your data to a valid, standard control.</p></htmltext>
<tokenext>It 's not the noise that 's the problem .
It 's the control .
Ideally you 'd have a blank slate as a control , then do an activity , and measure the difference .
The problem is , the brain is always doing something , so it 's really hard to make a controlled experiment .
It does n't matter how tight your error bars are if you 're not comparing your data to a valid , standard , control .</tokentext>
<sentencetext>It's not the noise that's the problem.
It's the control.
Ideally you'd have a blank slate as a control, then do an activity, and measure the difference.
The problem is, the brain is always doing something, so it's really hard to make a controlled experiment.
It doesn't matter how tight your error bars are if you're not comparing your data to a valid, standard, control.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623399</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623767</id>
	<title>I actually DO fMRI research</title>
	<author>AtomicDevice</author>
	<datestamp>1247073420000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext>I will say, it would be easy to make wild claims about what areas of the brain "do" things just by looking at a scan and showing a pretty picture.
<br> <br>
That said, consider these things:<br>
While non-peer-reviewed publications often publish exciting results, the scientific community typically does not accept claims about brain regions without corroboration from many different studies with different stimuli, often including monkey studies where real electrodes, and not just low-res fMRI, can be used.<br>
For fMRI studies, it is difficult to get the number of subjects that would be considered standard in other fields.  First off you actually need subjects who will do the assigned task, then you need them to do it while perfectly still, for anywhere between 20 minutes and several hours (usually in no more than 1-hour segments).  So the likelihood that just one study could prove something is quite small.<br>
In many (perhaps most) studies, all the subjects' brains are averaged together for data analysis; there are several different ways of doing this, none of them particularly accurate.  This again calls attention to the need for multiple studies.<br> <br>
It's also important to actually know what you're looking at when you see pictures of "brain activity": usually you are looking at the averaged activity of many subjects, after it has been run through (most likely) some form of general linear model or event-related analysis.  Both of these methods estimate and fit a hemodynamic response function (the pattern of brain response to a stimulus), and what you're actually looking at is the fit or perhaps t-values (roughly fit/std. deviation) for each voxel.<br> <br>Also note that for almost any study, I could pick some random brain areas that are "lighting up" and claim a response, but they would almost certainly be shot down with more subjects, another study, etc.<br> <br>Bottom line: responsible investigators can make good sense out of fMRI data, but doing one experiment and claiming you "found the love [or insert whatever emotion/thought] center" is irresponsible; such a claim should be corroborated by other studies, and hopefully monkey studies as well.</htmltext>
<tokenext>I will say , it would be easy to make wild claims about what areas of the brain " do " things just by looking at a scan and showing a pretty picture .
That said , consider these things : While non-peer-reviewed publications often publish exciting results , the scientific community typically does not accept brain regions without corroboration from many different studies with different stimuli , often including monkey studies where real electrodes and not just low-res fMRI can be used It is difficult to get the numbers of subject that would be considered standard in other studies for fMRI studies .
First off you actually need subjects who will do the assigned task , then you need them to do it perfectly still , for anywhere between 20 minutes to several hours ( usually in no more than 1-hour segments ) .
So the likelihood that just one study could prove something is quite small .
In many ( perhaps most ) studies , all the subjects brains are averaged together for data analysis , there are several different ways of doing this , none of them particularly accurate .
This again calls attention to the need for multiple studies It 's also important to actually know what you 're looking at when you see pictures of " brain activity " , usually you are looking at the averaged activity of many subjects , after it has been run through ( most likely ) some form of general linear model or event-related analysis .
Both of these methods estimate and fit a hemodynamic response function ( the pattern of brain response to a stimulus ) , and what you 're actually looking at is the fit or perhaps t-values ( roughly fit/std .
deviation ) for each voxel .
Also note , that for almost any study , I could pick some random brain areas that are " lighting up " and claim a response , but they would almost certainly be shot down with more subjects , another study , etc .
bottom line , responsible investigators can make good sense out of fMRI data , but doing one experiment and claiming you " found the love [ or insert whatever emotion/thought ] center is irresponsible and should be correlated with other studies and hopefully monkey studies as well .</tokentext>
<sentencetext>I will say, it would be easy to make wild claims about what areas of the brain "do" things just by looking at a scan and showing a pretty picture.
That said, consider these things:
While non-peer-reviewed publications often publish exciting results, the scientific community typically does not accept brain regions without corroboration from many different studies with different stimuli, often including monkey studies where real electrodes and not just low-res fMRI can be used
For fMRI studies, it is difficult to get the number of subjects that would be considered standard in other fields.
First off you actually need subjects who will do the assigned task, then you need them to do it perfectly still, for anywhere between 20 minutes to several hours (usually in no more than 1-hour segments).
So the likelihood that just one study could prove something is quite small.
In many (perhaps most) studies, all the subjects' brains are averaged together for data analysis; there are several different ways of doing this, none of them particularly accurate.
This again calls attention to the need for multiple studies 
It's also important to actually know what you're looking at when you see pictures of "brain activity", usually you are looking at the averaged activity of many subjects, after it has been run through (most likely) some form of general linear model or event-related analysis.
Both of these methods estimate and fit a hemodynamic response function (the pattern of brain response to a stimulus), and what you're actually looking at is the fit or perhaps t-values (roughly fit/std.
deviation) for each voxel.
Also note, that for almost any study, I could pick some random brain areas that are "lighting up" and claim a response, but they would almost certainly be shot down with more subjects, another study, etc.
Bottom line: responsible investigators can make good sense out of fMRI data, but doing one experiment and claiming you "found the love [or insert whatever emotion/thought] center" is irresponsible; such a claim should be corroborated by other studies, and hopefully monkey studies as well.</sentencetext>
</comment>
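For readers who have not seen what "the fit or perhaps t-values (roughly fit/std. deviation)" means concretely, here is a toy single-voxel version of the GLM analysis AtomicDevice describes. Every number here is made up for illustration, and the HRF shape is a crude stand-in; real packages use a canonical double-gamma HRF plus many corrections (drift, autocorrelation, motion regressors) this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 200

# Hypothetical event train: one stimulus every 20 scans
stimulus = np.zeros(n_scans)
stimulus[::20] = 1.0

# Toy hemodynamic response function (peaks a few scans after the event)
t = np.arange(20.0)
hrf = t**5 * np.exp(-t)
hrf /= hrf.max()

# Predicted BOLD response = stimulus convolved with the HRF
regressor = np.convolve(stimulus, hrf)[:n_scans]

# Simulated voxel time series: true effect size 2.0 plus Gaussian noise
voxel = 2.0 * regressor + rng.normal(0.0, 1.0, n_scans)

# GLM fit: design matrix with the HRF regressor and an intercept
X = np.column_stack([regressor, np.ones(n_scans)])
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)

# t-value for the regressor: estimated fit divided by its standard error
resid = voxel - X @ beta
dof = n_scans - X.shape[1]
sigma2 = resid @ resid / dof
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
t_value = beta[0] / se
```

A statistical map is just this t-value computed independently at every voxel and then thresholded, which is why the multiple-comparisons and selection issues discussed in this thread matter so much.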
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28624107</id>
	<title>Old news...</title>
	<author>Anonymous</author>
	<datestamp>1247074740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>These issues were explored several years ago in a book by Joseph Dumit, <i>Picturing Personhood</i>, which covers both the problems in constructing these images, and the way that these images are taken up as facts by courts, the news, etc.</p></htmltext>
<tokenext>These issues were explored several years ago in a book by Joseph Dumit , Picturing Personhood , which covers both the problems in constructing these images , and the way that these images are taken up as facts by courts , the news , etc .</tokentext>
<sentencetext>These issues were explored several years ago in a book by Joseph Dumit, Picturing Personhood, which covers both the problems in constructing these images, and the way that these images are taken up as facts by courts, the news, etc.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28625149</id>
	<title>Correlation is not Causation?</title>
	<author>Anonymous</author>
	<datestamp>1247078160000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>Whoever tagged this story 'correlationisnotcausation' is a fucking idiot. You're not as smart as you think you are.</htmltext>
<tokenext>Whoever tagged this story 'correlationisnotcausation ' is a fucking idiot .
You 're not as smart as you think you are .</tokentext>
<sentencetext>Whoever tagged this story 'correlationisnotcausation' is a fucking idiot.
You're not as smart as you think you are.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28636295</id>
	<title>Response paper to Voodoo Correlations</title>
	<author>daenris</author>
	<datestamp>1247153340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Anyone who is actually interested may want to check out one response paper to Vul's Voodoo Correlations paper, which points out a number of problems in Vul's own analysis.
<a href="http://www.scn.ucla.edu/pdf/LiebermanBerkmanWager(invitedreply).pdf" title="ucla.edu">http://www.scn.ucla.edu/pdf/LiebermanBerkmanWager(invitedreply).pdf</a> [ucla.edu]</htmltext>
<tokenext>Anyone who is actually interested may want to check out one response paper to the Vul Voodoo Correlations paper which points out a number of problems that Vul himself has in his analysis .
http : //www.scn.ucla.edu/pdf/LiebermanBerkmanWager ( invitedreply ) .pdf [ ucla.edu ]</tokentext>
<sentencetext>Anyone who is actually interested may want to check out one response paper to the Vul Voodoo Correlations paper which points out a number of problems that Vul himself has in his analysis.
http://www.scn.ucla.edu/pdf/LiebermanBerkmanWager(invitedreply).pdf [ucla.edu]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623241</id>
	<title>I have seen this before...</title>
	<author>Anonymous</author>
	<datestamp>1247071440000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>"Flaws in the statistical methods used by researchers can result in false images with a variety of inaccuracies. Yet the images produced are so vivid and engaging that even other neuroscientists can be misled by them."</p><p>Looks just like Climate Science then...</p></htmltext>
<tokenext>" Flaws in the statistical methods used by researchers can result in false images with a variety of inaccuracies .
Yet the images produced are so vivid and engaging that even other neuroscientists can be misled by them .
" Looks just like Climate Science then.. .</tokentext>
<sentencetext>"Flaws in the statistical methods used by researchers can result in false images with a variety of inaccuracies.
Yet the images produced are so vivid and engaging that even other neuroscientists can be misled by them.
"Looks just like Climate Science then...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28624723</id>
	<title>"These images are not snapshots"?  No kidding.</title>
	<author>jeffb (2.718)</author>
	<datestamp>1247076720000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext><p>I've been lucky enough to work with MR and CT imaging researchers for a while now.  One of the benefits of this job is that I've gotten to learn a lot about how these images are acquired and reconstructed.  It's not quite as bad as making sausage, but it's a lot more involved than a "snapshot".</p><p>For CT, we acquire a bunch of 2D images through you from different angles, then do a lot of number crunching to generate a 3D volume.  The problem is that you don't hold still while we're doing it.  You can try; you can even hold your breath, but you can't "hold your heart".  As your organs move between views, we get <i>motion artifacts</i> -- shape distortion, bright or dark areas, even "things" that aren't really there.</p><p>For MR, it's even worse.  I can barely tread water in the physics of it, but we're effectively capturing a line at a time in 3D space.  (We're actually acquiring data in "k-space", then running it through a Fourier transform to make it spatial.)  Not only is it subject to motion artifacts, it's also subject to <i>susceptibility artifacts</i> (distortions because of the magnetic properties of certain materials), <i>flow artifacts</i> (blood moves through vessels between the time that we apply a magnetic pulse and the time that we read back emitted signals), and lots of other things.</p><p>fMRI is just adding yet another layer of aggregation and interpretation on top of all this.  Sure, it's a "visualization of data generated by repeated scans", but so is <i>every</i> CT or MRI image.</p><p>3D imaging, especially MRI, is hideously complicated and indirect.  It's almost inconceivable that it could yield results with any physical significance.</p><p>...and yet, it does.  It's become so routine, so reliable, so well-understood and well-controlled, that doctors and researchers know they can rely on it as a matter of course.  
They still have to be aware of the errors and distortions that can arise, but that's true of every imaging or monitoring system, all the way down to the stethoscope and the fever thermometer.</p></htmltext>
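The k-space point above can be made concrete in a few lines. This is a deliberately toy sketch (a synthetic square "phantom", fully sampled Cartesian k-space, no noise, no coil physics): the scanner effectively samples spatial frequencies, the image is recovered by an inverse Fourier transform, and corrupting even a single k-space line spreads error across the whole image, which is why motion and susceptibility produce such characteristic artifacts rather than localized smudges:

```python
import numpy as np

# A 64x64 toy "phantom" with a bright square object in the middle
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0

# What the scanner effectively samples: spatial frequencies (k-space)
kspace = np.fft.fft2(phantom)

# Reconstruction is an inverse Fourier transform back to image space
recon = np.fft.ifft2(kspace).real

# Simulate a phase error on one k-space line (e.g. subject motion
# during that readout); the artifact spreads over the entire image
kspace_bad = kspace.copy()
kspace_bad[5, :] *= np.exp(1j * 1.5)
recon_bad = np.fft.ifft2(kspace_bad).real
```

With clean k-space, `recon` matches the phantom essentially exactly; with one corrupted line, `recon_bad` shows the ghosting/ripple pattern familiar from motion-corrupted MR images.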
<tokenext>I 've been lucky enough to work with MR and CT imaging researchers for a while now .
One of the benefits of this job is that I 've gotten to learn a lot about how these images are acquired and reconstructed .
It 's not quite as bad as making sausage , but it 's a lot more involved than a " snapshot " .For CT , we acquire a bunch of 2D images through you from different angles , then do a lot of number crunching to generate a 3D volume .
The problem is that you do n't hold still while we 're doing it .
You can try ; you can even hold your breath , but you ca n't " hold your heart " .
As your organs move between views , we get motion artifacts -- shape distortion , bright or dark areas , even " things " that are n't really there.For MR , it 's even worse .
I can barely tread water in the physics of it , but we 're effectively capturing a line at a time in 3D space .
( We 're actually acquiring data in " k-space " , then running it through a Fourier transform to make it spatial . )
Not only is it subject to motion artifacts , it 's also subject to susceptibility artifacts ( distortions because of the magnetic properties of certain materials ) , flow artifacts ( blood moves through vessels between the time that we apply a magnetic pulse and the time that we read back emitted signals ) , and lots of other things .
fMRI is just adding yet another layer of aggregation and interpretation on top of all this .
Sure , it 's a " visualization of data generated by repeated scans " , but so is every CT or MRI image .
3D imaging , especially MRI , is hideously complicated and indirect .
It 's almost inconceivable that it could yield results with any physical significance .
... and yet , it does .
It 's become so routine , so reliable , so well-understood and well-controlled , that doctors and researchers know they can rely on it as a matter of course .
They still have to be aware of the errors and distortions that can arise , but that 's true of every imaging or monitoring system , all the way down to the stethoscope and the fever thermometer .</tokenext>
<sentencetext>I've been lucky enough to work with MR and CT imaging researchers for a while now.
One of the benefits of this job is that I've gotten to learn a lot about how these images are acquired and reconstructed.
It's not quite as bad as making sausage, but it's a lot more involved than a "snapshot".
For CT, we acquire a bunch of 2D images through you from different angles, then do a lot of number crunching to generate a 3D volume.
The problem is that you don't hold still while we're doing it.
You can try; you can even hold your breath, but you can't "hold your heart".
As your organs move between views, we get motion artifacts -- shape distortion, bright or dark areas, even "things" that aren't really there.
For MR, it's even worse.
I can barely tread water in the physics of it, but we're effectively capturing a line at a time in 3D space.
(We're actually acquiring data in "k-space", then running it through a Fourier transform to make it spatial.)
Not only is it subject to motion artifacts, it's also subject to susceptibility artifacts (distortions because of the magnetic properties of certain materials), flow artifacts (blood moves through vessels between the time that we apply a magnetic pulse and the time that we read back emitted signals), and lots of other things.
fMRI is just adding yet another layer of aggregation and interpretation on top of all this.
Sure, it's a "visualization of data generated by repeated scans", but so is every CT or MRI image.
3D imaging, especially MRI, is hideously complicated and indirect.
It's almost inconceivable that it could yield results with any physical significance.
...and yet, it does.
It's become so routine, so reliable, so well-understood and well-controlled, that doctors and researchers know they can rely on it as a matter of course.
They still have to be aware of the errors and distortions that can arise, but that's true of every imaging or monitoring system, all the way down to the stethoscope and the fever thermometer.</sentencetext>
</comment>
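The comment above describes MR acquisition as collecting data in "k-space" and Fourier-transforming it into an image, with motion corrupting the result. A minimal sketch of that idea, in Python/NumPy: the phantom, array sizes, and phase-error values are all invented for illustration, and real reconstruction pipelines are far more involved.

```python
import numpy as np

def reconstruct_from_kspace(kspace):
    """Turn fully sampled 2D k-space data back into a magnitude image.

    The inverse 2D FFT maps spatial frequencies to image space; the
    shift calls follow the convention that DC sits at the array centre.
    """
    img = np.fft.ifft2(np.fft.ifftshift(kspace))
    return np.abs(np.fft.fftshift(img))

# Forward-simulate: a toy "phantom" image -> k-space -> back again.
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0  # a bright square

kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom)))
recovered = reconstruct_from_kspace(kspace)  # matches the phantom

# Motion between readouts shows up as phase errors on individual
# k-space lines; even a few corrupted lines distort the whole image.
kspace_bad = kspace.copy()
kspace_bad[::8, :] *= np.exp(1j * 1.0)  # phase error on every 8th line
ghosted = reconstruct_from_kspace(kspace_bad)
```

Because each k-space sample contributes to every pixel, the corruption is not local: the ghosting spreads across the entire reconstructed image, which is why motion artifacts look so strange.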
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28629709</id>
	<title>Contrasts and Multiple Subjects</title>
	<author>bmacs27</author>
	<datestamp>1247054820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Disclaimer:  I'm a neuroscientist/psychophysicist who studies vision.  Traditionally, those who study vision or motor control have the lowest tolerance for squishiness.  This is because what attracts us to the field is the fact that we can correlate human behavior with objective measurements such as joint angles, eye movements, and luminance.
<br> <br>
fMRI, like all scientific tools, comes with its caveats.  First, as has been mentioned, it isn't measuring current at all, but rather oxygenation of blood.  I disagree, however, with the previous poster who claims this has not been demonstrated to correlate with spiking behavior.  That relationship has been shown; however, there is a variable lag between spiking activity and the BOLD response.
<br> <br>
Second, as you might imagine, just about your entire brain is involved with any task tested in the magnet.  To get around this they use the concept of "contrasts".  In other words, the subject performs two tasks: the task of interest, and a baseline task designed to involve every part of the brain engaged by the test condition other than the psychological process of interest.  For instance, if one were interested in speech areas you might have the subject report the category of object presented in an image.  The baseline task, in that case, would be to passively view the same images.  This allows you to subtract the activity, revealing areas that are more or less active in the test condition than during the baseline task.  This is all well and good in the sorts of tasks my colleagues do, as it is fairly straightforward what the brain is doing when a light is flickering in a particular part of the visual field.  When one is more interested in complicated social behavior, or emotional regulation, it is much more difficult and all the more necessary to carefully validate your choice of conditions to contrast.
<br> <br>
Third, as was mentioned earlier, for statistical or logistical reasons the data must often be averaged across subjects.  IMHO this is BAD, like very bad.  In vision we typically analyze the data independently for each subject, map their individual brains, and report both within-subject and collapsed-across-subject data for activity in area X.  In fields such as social neuroscience it is often standard to effectively blur and distort the data to fit a "brain template", as the individual differences between convolutions in the brain are enormous.  They are then in a position to average the contrasts for each of the subjects together in order to get an area that becomes significant.  My largest problem with such a technique is that quite often they get a maximally significant between-subjects area that is hardly at all active in any individual subject.  To me, that screams "you're doing it wrong."
<br> <br>
But, like I said, those of us in vision are often thought of as hard scientists, but with unrealistic expectations of the rigor that should be employed in the rest of the brain sciences.  It's a complicated problem, and to be fair many of these questions can't be answered in any other way.  Vision and motor control are lucky that animal models such as the rhesus macaque are extremely similar to humans within those areas.  The same cannot be said for frontal cortex, for instance.  We've also got about a 100-year head start on most of the rest of neuroscience.</htmltext>
<tokenext>Disclaimer : I 'm a neuroscientist/psychophysicist that studies vision .
Traditionally , those that study vision or motor control have the lowest tolerance for squishiness .
This is because what attracts us to the field is the fact that we can correlate human behavior with objective measurements such as joint angles , eye movements , and luminance .
fMRI , like all scientific tools , comes with its caveats .
First , as has been mentioned , it is n't measuring current at all , but rather oxygenation of blood .
I disagree , however , with the previous poster who claims this has not been demonstrated to correlate with spiking behavior .
That relationship has been shown ; however , there is a variable lag between spiking activity and the BOLD response .
Second , as you might imagine , just about your entire brain is involved with any task tested in the magnet .
To get around this they use the concept of " contrasts " .
In other words , the subject performs two tasks : one baseline task which is designed to involve every part of the brain in the test condition , other than the psychological process of interest .
For instance , if one were interested in speech areas you might have the subject report the category of object presented in an image .
The baseline task , in that case , would be to passively view the same images .
This allows you to subtract the activity , revealing areas that are more or less active in the test condition than during the baseline task .
This is all well and good in the sorts of task my colleagues do , as it is fairly straightforward what the brain is doing when a light is flickering in a particular part of the visual field .
When one is more interested in complicated social behavior , or emotional regulation , it is much more difficult and all the more necessary to carefully validate your choice of conditions to contrast .
Third , as was mentioned earlier , for statistical or logistical reasons often the data must be averaged across subjects .
IMHO this is BAD , like very bad .
In vision typically we analyze the data independently for each subject , map their individual brains , and report both within subject and collapsed across subject data for activity in area X .
In fields such as social neuroscience it is often the standard to effectively blur and distort the data to fit a " brain template " as the individual differences between convolutions in the brain are enormous .
They are then in a position to average the contrasts for each of the subjects together in order to get an area that becomes significant .
My largest problem with such a technique is that quite often they get a maximally significant between subjects area that is hardly at all active in any individual subject .
To me , that screams " you 're doing it wrong . "
But , like I said , those of us in vision are often thought of as hard scientists , but with unrealistic expectations of the rigor that should be employed in the rest of the brain sciences .
It 's a complicated problem , and to be fair many of these questions ca n't be answered in any other way .
Vision and motor control are lucky that animal models such as the rhesus macaque are extremely similar to humans within those areas .
The same can not be said for frontal cortex , for instance .
We 've also got about a 100 year head start on most of the rest of neuroscience .</tokenext>
<sentencetext>Disclaimer:  I'm a neuroscientist/psychophysicist that studies vision.
Traditionally, those that study vision or motor control have the lowest tolerance for squishiness.
This is because what attracts us to the field is the fact that we can correlate human behavior with objective measurements such as joint angles, eye movements, and luminance.
fMRI, like all scientific tools, comes with its caveats.
First, as has been mentioned, it isn't measuring current at all, but rather oxygenation of blood.
I disagree, however, with the previous poster who claims this has not been demonstrated to correlate with spiking behavior.
That relationship has been shown; however, there is a variable lag between spiking activity and the BOLD response.
Second, as you might imagine, just about your entire brain is involved with any task tested in the magnet.
To get around this they use the concept of "contrasts".
In other words, the subject performs two tasks: one baseline task which is designed to involve every part of the brain in the test condition, other than the psychological process of interest.
For instance, if one were interested in speech areas you might have the subject report the category of object presented in an image.
The baseline task, in that case, would be to passively view the same images.
This allows you to subtract the activity, revealing areas that are more or less active in the test condition than during the baseline task.
This is all well and good in the sorts of task my colleagues do, as it is fairly straightforward what the brain is doing when a light is flickering in a particular part of the visual field.
When one is more interested in complicated social behavior, or emotional regulation, it is much more difficult and all the more necessary to carefully validate your choice of conditions to contrast.
Third, as was mentioned earlier, for statistical or logistical reasons often the data must be averaged across subjects.
IMHO this is BAD, like very bad.
In vision typically we analyze the data independently for each subject, map their individual brains, and report both within subject and collapsed across subject data for activity in area X.
In fields such as social neuroscience it is often the standard to effectively blur and distort the data to fit a "brain template" as the individual differences between convolutions in the brain are enormous.
They are then in a position to average the contrasts for each of the subjects together in order to get an area that becomes significant.
My largest problem with such a technique is that quite often they get a maximally significant between subjects area that is hardly at all active in any individual subject.
To me, that screams "you're doing it wrong."
But, like I said, those of us in vision are often thought of as hard scientists, but with unrealistic expectations of the rigor that should be employed in the rest of the brain sciences.
It's a complicated problem, and to be fair many of these questions can't be answered in any other way.
Vision and motor control are lucky that animal models such as the rhesus macaque are extremely similar to humans within those areas.
The same cannot be said for frontal cortex, for instance.
We've also got about a 100-year head start on most of the rest of neuroscience.</sentencetext>
</comment>
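The comment above walks through two of its points mechanically: a contrast subtracts baseline-task activity from test-task activity, and naive cross-subject averaging can yield a "significant" area weak in every individual. A toy Python/NumPy sketch of both ideas, with all data and sizes invented (real analyses fit GLMs and run proper statistics, not a raw subtraction and threshold):

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (8, 8, 8)  # tiny toy brain volume, not a realistic scan size

# Baseline task: passive viewing. Test task: same stimuli plus the
# process of interest, which boosts one small region of "voxels".
baseline = rng.normal(100.0, 1.0, size=shape)
test = baseline + rng.normal(0.0, 0.5, size=shape)  # measurement noise
test[2:4, 2:4, 2:4] += 6.0  # region engaged only by the test task

# The contrast: voxel-wise subtraction cancels everything shared by the
# two conditions, leaving the extra activity in the boosted region.
contrast = test - baseline
active = contrast > 3.0  # crude threshold, standing in for real stats

# The averaging pitfall: two subjects whose activity peaks in
# *different* voxels (e.g. after warping to a "brain template").
subj_a = np.zeros(10); subj_a[3] = 2.0
subj_b = np.zeros(10); subj_b[6] = 2.0
group_mean = (subj_a + subj_b) / 2
# group_mean peaks at 1.0 -- half the activity either individual shows,
# in voxels where the *other* subject shows nothing at all.
```

The contrast only isolates the process of interest if the baseline task really does engage everything else the test task engages, which is the validation problem the comment flags for social and emotional studies; and the group average illustrates how a between-subjects "area" can emerge that no single subject strongly exhibits.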
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623291</id>
	<title>MAGNETS!</title>
	<author>Anonymous</author>
	<datestamp>1247071620000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>I for one welcome our new magnetic overlords.</p></htmltext>
<tokenext>I for one welcome our new magnetic overlords .</tokentext>
<sentencetext>I for one welcome our new magnetic overlords.</sentencetext>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_08_1526230_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28626973
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623109
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_08_1526230_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623635
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623229
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_08_1526230_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28627223
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28624723
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_08_1526230_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28625785
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623399
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_08_1526230_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28635377
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28624723
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_08_1526230_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28635931
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28624723
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_08_1526230.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28625149
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_08_1526230.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28624723
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28627223
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28635377
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28635931
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_08_1526230.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28624173
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_08_1526230.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623223
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_08_1526230.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28628301
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_08_1526230.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623399
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28625785
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_08_1526230.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623109
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28626973
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_08_1526230.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623229
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623635
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_08_1526230.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_08_1526230.28623767
</commentlist>
</conversation>
