<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_03_02_0242224</id>
	<title>Recovering Data From Noise</title>
	<author>kdawson</author>
	<datestamp>1267535640000</datestamp>
	<htmltext>An anonymous reader tips an account up at Wired of a hot new field of mathematics and applied algorithm research called "compressed sensing" that takes advantage of the mathematical concept of <em>sparsity</em> to <a href="http://www.wired.com/magazine/2010/02/ff_algorithm/all/1">recreate images or other datasets from noisy, incomplete inputs</a>. <i>"[The inventor of CS, Emmanuel] Cand&egrave;s can envision a long list of applications based on what he and his colleagues have accomplished. He sees, for example, a future in which the technique is used in more than MRI machines. Digital cameras, he explains, gather huge amounts of information and then compress the images. But compression, at least if CS is available, is a gigantic waste. If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place? ... The ability to gather meaningful data from tiny samples of information is also enticing to the military."</i></htmltext>
<tokentext>An anonymous reader tips an account up at Wired of a hot new field of mathematics and applied algorithm research called " compressed sensing " that takes advantage of the mathematical concept of sparsity to recreate images or other datasets from noisy , incomplete inputs .
" [ The inventor of CS , Emmanuel ] Candès can envision a long list of applications based on what he and his colleagues have accomplished .
He sees , for example , a future in which the technique is used in more than MRI machines .
Digital cameras , he explains , gather huge amounts of information and then compress the images .
But compression , at least if CS is available , is a gigantic waste .
If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress , why not just save battery power and memory and record 90 percent less data in the first place ?
... The ability to gather meaningful data from tiny samples of information is also enticing to the military .
"</tokentext>
<sentencetext>An anonymous reader tips an account up at Wired of a hot new field of mathematics and applied algorithm research called "compressed sensing" that takes advantage of the mathematical concept of sparsity to recreate images or other datasets from noisy, incomplete inputs.
"[The inventor of CS, Emmanuel] Candès can envision a long list of applications based on what he and his colleagues have accomplished.
He sees, for example, a future in which the technique is used in more than MRI machines.
Digital cameras, he explains, gather huge amounts of information and then compress the images.
But compression, at least if CS is available, is a gigantic waste.
If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place?
... The ability to gather meaningful data from tiny samples of information is also enticing to the military.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329510</id>
	<title>Re:I am a bit worried about the "fill in the shape</title>
	<author>ZeroSumHappiness</author>
	<datestamp>1267543740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Why in the world would you use this in a medical image? That seems like quite the straw man.</htmltext>
<tokentext>Why in the world would you use this in a medical image ?
That seems like quite the straw man .</tokentext>
<sentencetext>Why in the world would you use this in a medical image?
That seems like quite the straw man.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328826</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329606</id>
	<title>Portal</title>
	<author>Anonymous</author>
	<datestamp>1267544220000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Just in time to help decipher Valve's latest update...</p><p>http://www.rockpapershotgun.com/2010/03/02/portal-theres-something-going-on/</p></htmltext>
<tokentext>Just in time to help decipher Valve 's latest update...http : //www.rockpapershotgun.com/2010/03/02/portal-theres-something-going-on/</tokentext>
<sentencetext>Just in time to help decipher Valve's latest update...http://www.rockpapershotgun.com/2010/03/02/portal-theres-something-going-on/</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329884</id>
	<title>Re:Come again?</title>
	<author>rnturn</author>
	<datestamp>1267545480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><blockquote><div><p>If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place?<nobr> <wbr></nobr>..</p></div></blockquote><p>

That's what a digital camera is about, isn't it?</p></div></blockquote><p>Perhaps if you're using some low-end digital camera but not if your camera allows you to save images in RAW format.  Sort of like it was in the days you might have spent in the darkroom: if it ain't on the negative you're not going to get it back in the darkroom. Why throw information away before even viewing it?  The only reason to compress images (IMHO) is if you're going to put them up on a web site or transmit them via email. Yeah, compressed images allow you to save more on the memory card but memory card prices are such that you can throw a much bigger card than the one that shipped with the camera and shoot all day long. (I have an older camera that only takes up to 4GB cards and I still haven't been able to fill it up in less than a day.)

</p><p>I guess I don't see the advantage to throwing away imagery information and praying that a mathematical algorithm <i>might</i> be able to get it back.</p>
	</htmltext>
<tokentext>If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress , why not just save battery power and memory and record 90 percent less data in the first place ?
. . That 's what a digital camera is about , is n't it ? Perhaps if you 're using some low-end digital camera but not if your camera allows you to save images in RAW format .
Sort of like it was in the days you might have spent in the darkroom : if it ai n't on the negative you 're not going to get it back in the darkroom .
Why throw information away before even viewing it ?
The only reason to compress images ( IMHO ) is if you 're going to put them up on a web site or transmit them via email .
Yeah , compressed images allow you to save more on the memory card but memory card prices are such that you can throw a much bigger card than the one that shipped with the camera and shoot all day long .
( I have an older camera that only takes up to 4GB cards and I still have n't been able to fill it up in less than a day .
) I guess I do n't see the advantage to throwing away imagery information and praying that a mathematical algorithm might be able to get it back .</tokentext>
<sentencetext>If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place?
..

That's what a digital camera is about, isn't it?Perhaps if you're using some low-end digital camera but not if your camera allows you to save images in RAW format.
Sort of like it was in the days you might have spent in the darkroom: if it ain't on the negative you're not going to get it back in the darkroom.
Why throw information away before even viewing it?
The only reason to compress images (IMHO) is if you're going to put them up on a web site or transmit them via email.
Yeah, compressed images allow you to save more on the memory card but memory card prices are such that you can throw a much bigger card than the one that shipped with the camera and shoot all day long.
(I have an older camera that only takes up to 4GB cards and I still haven't been able to fill it up in less than a day.
)

I guess I don't see the advantage to throwing away imagery information and praying that a mathematical algorithm might be able to get it back.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328824</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31332566</id>
	<title>SETI</title>
	<author>Anonymous</author>
	<datestamp>1267556400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Any potential application of this in the SETI program?</p></htmltext>
<tokentext>Any potential application of this in the SETI program ?</tokentext>
<sentencetext>Any potential application of this in the SETI program?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329628</id>
	<title>Re:Why not...</title>
	<author>Matje</author>
	<datestamp>1267544280000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>RTFA that's the point of the algorithm: the camera sensors don't need to calculate what is interesting about the picture, they just need to sample a randomly distributed set of pixels. The algorithm calculates the highres image from that sample.</p><p>The idea behind the algorithm is really very elegant. To parafrase their approach: imagine a 1000x1000 pixel image with 24 bit color. There are 24 ^ 1000000 unique pixel configurations to fill that image. The vast majority of those configuration will look like noise. In real life you generally take pictures of non-noise things, like portraits etc. You might define a non-noise image as one where knowing the actual value of a given pixel allows a probability of predicting the value of a neighboring pixel that is greater than chance. A noisy image is one where knowing a given pixel value gives you no information about neighboring pixels at all.</p><p>The algorithm provides a way to distinguish between image configurations that depict random noise and those that depict something non-random. Since, apparently, the ratio of non-random image configurations is so small compared to the noisy image configurations, you need only a couple of hints to figure out which of the non-random image configurations you need. What the algoritm does is take a random sample of a non-random image (10% of the original pixels), and calculates a non-random image configuration that fits the given sample. Even though in theory you might end up with Madonna from a picture of E-T, in practice you don't (and I believe they claim they can prove that the chance of accidentally ending up with Madonna is extremely small).</p><p>It's all about entropy really.</p></htmltext>
<tokentext>RTFA that 's the point of the algorithm : the camera sensors do n't need to calculate what is interesting about the picture , they just need to sample a randomly distributed set of pixels .
The algorithm calculates the highres image from that sample.The idea behind the algorithm is really very elegant .
To parafrase their approach : imagine a 1000x1000 pixel image with 24 bit color .
There are 24 ^ 1000000 unique pixel configurations to fill that image .
The vast majority of those configuration will look like noise .
In real life you generally take pictures of non-noise things , like portraits etc .
You might define a non-noise image as one where knowing the actual value of a given pixel allows a probability of predicting the value of a neighboring pixel that is greater than chance .
A noisy image is one where knowing a given pixel value gives you no information about neighboring pixels at all.The algorithm provides a way to distinguish between image configurations that depict random noise and those that depict something non-random .
Since , apparently , the ratio of non-random image configurations is so small compared to the noisy image configurations , you need only a couple of hints to figure out which of the non-random image configurations you need .
What the algoritm does is take a random sample of a non-random image ( 10 % of the original pixels ) , and calculates a non-random image configuration that fits the given sample .
Even though in theory you might end up with Madonna from a picture of E-T , in practice you do n't ( and I believe they claim they can prove that the chance of accidentally ending up with Madonna is extremely small ) .It 's all about entropy really .</tokentext>
<sentencetext>RTFA that's the point of the algorithm: the camera sensors don't need to calculate what is interesting about the picture, they just need to sample a randomly distributed set of pixels.
The algorithm calculates the highres image from that sample.The idea behind the algorithm is really very elegant.
To parafrase their approach: imagine a 1000x1000 pixel image with 24 bit color.
There are 24 ^ 1000000 unique pixel configurations to fill that image.
The vast majority of those configuration will look like noise.
In real life you generally take pictures of non-noise things, like portraits etc.
You might define a non-noise image as one where knowing the actual value of a given pixel allows a probability of predicting the value of a neighboring pixel that is greater than chance.
A noisy image is one where knowing a given pixel value gives you no information about neighboring pixels at all.The algorithm provides a way to distinguish between image configurations that depict random noise and those that depict something non-random.
Since, apparently, the ratio of non-random image configurations is so small compared to the noisy image configurations, you need only a couple of hints to figure out which of the non-random image configurations you need.
What the algoritm does is take a random sample of a non-random image (10% of the original pixels), and calculates a non-random image configuration that fits the given sample.
Even though in theory you might end up with Madonna from a picture of E-T, in practice you don't (and I believe they claim they can prove that the chance of accidentally ending up with Madonna is extremely small).It's all about entropy really.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328778</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329188</id>
	<title>Deckard</title>
	<author>Anonymous</author>
	<datestamp>1267542120000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext>Enhance 34 to 36. Pan right and pull back. Stop. Enhance 34 to 46. Give me a hard copy right there.</htmltext>
<tokentext>Enhance 34 to 36 .
Pan right and pull back .
Stop. Enhance 34 to 46 .
Give me a hard copy right there .</tokentext>
<sentencetext>Enhance 34 to 36.
Pan right and pull back.
Stop. Enhance 34 to 46.
Give me a hard copy right there.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328740</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31333040</id>
	<title>Quick!</title>
	<author>ThatsNotPudding</author>
	<datestamp>1267558200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>To the Zappruder film!<br>/jk</htmltext>
<tokentext>To the Zappruder film ! /jk</tokentext>
<sentencetext>To the Zappruder film!/jk</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329482</id>
	<title>Overview of Algorithm</title>
	<author>Chapter80</author>
	<datestamp>1267543620000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><p>Here's how Compressed Sensing works with standard JPGs.</p><p>First the program takes the target JPG (which you want to be very large), and treats it as random noise. Simply a field of random zeros and ones. Then, within that vast field, the program selects a pattern or frequency to look for variations in the noise pattern.</p><p>The variations in the noise pattern act as a beacon - sort of a signal that the payload is coming. Common variations include mathematical pulses at predictable intervals - say something that would easily be recognizable by a 5th-grader, like say a pattern of prime numbers.</p><p>Then it searches for a second layer, nested within the main signal. Some bits are bits to tell how to interpret the other bits. Use a gray scale with standard interpolation. Rotate the second layer 90 degrees. Make sure there's a string break every 60 characters, and search for an auxiliary sideband channel. Make sure that the second layer is zoomed out sufficiently, and using a less popular protocol language; otherwise it won't be easily recognizable upon first glance.</p><p>Here's the magical part: It then finds a third layer. Sort of like in ancient times when parchment was in short supply people would write over old writing... it was called a palimpsest. Here you can uncompress over 10,000 "frames" of data, which can enhance a simple noise pattern to be a recognizable political figure.</p><p>Further details on this method can be found <a href="http://www.imsdb.com/Movie%20Scripts/Contact%20Script.html" title="imsdb.com">here.</a> [imsdb.com]</p><p>--<br>Recycle when possible!</p></htmltext>
<tokentext>Here 's how Compressed Sensing works with standard JPGs.First the program takes the target JPG ( which you want to be very large ) , and treats it as random noise .
Simply a field of random zeros and ones .
Then , within that vast field , the program selects a pattern or frequency to look for variations in the noise pattern.The variations in the noise pattern act as a beacon - sort of a signal that the payload is coming .
Common variations include mathematical pulses at predictable intervals - say something that would easily be recognizable by a 5th-grader , like say a pattern of prime numbers.Then it searches for a second layer , nested within the main signal .
Some bits are bits to tell how to interpret the other bits .
Use a gray scale with standard interpolation .
Rotate the second layer 90 degrees .
Make sure there 's a string break every 60 characters , and search for an auxiliary sideband channel .
Make sure that the second layer is zoomed out sufficiently , and using a less popular protocol language ; otherwise it wo n't be easily recognizable upon first glance.Here 's the magical part : It then finds a third layer .
Sort of like in ancient times when parchment was in short supply people would write over old writing... it was called a palimpsest .
Here you can uncompress over 10,000 " frames " of data , which can enhance a simple noise pattern to be a recognizable political figure.Further details on this method can be found here .
[ imsdb.com ] --Recycle when possible !</tokentext>
<sentencetext>Here's how Compressed Sensing works with standard JPGs.First the program takes the target JPG (which you want to be very large), and treats it as random noise.
Simply a field of random zeros and ones.
Then, within that vast field, the program selects a pattern or frequency to look for variations in the noise pattern.The variations in the noise pattern act as a beacon - sort of a signal that the payload is coming.
Common variations include mathematical pulses at predictable intervals - say something that would easily be recognizable by a 5th-grader, like say a pattern of prime numbers.Then it searches for a second layer, nested within the main signal.
Some bits are bits to tell how to interpret the other bits.
Use a gray scale with standard interpolation.
Rotate the second layer 90 degrees.
Make sure there's a string break every 60 characters, and search for an auxiliary sideband channel.
Make sure that the second layer is zoomed out sufficiently, and using a less popular protocol language; otherwise it won't be easily recognizable upon first glance.Here's the magical part: It then finds a third layer.
Sort of like in ancient times when parchment was in short supply people would write over old writing... it was called a palimpsest.
Here you can uncompress over 10,000 "frames" of data, which can enhance a simple noise pattern to be a recognizable political figure.Further details on this method can be found here.
[imsdb.com]--Recycle when possible!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328740</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31332190</id>
	<title>A correct interpretation</title>
	<author>eric.tramel</author>
	<datestamp>1267555260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The<nobr> <wbr></nobr>/. headline and the Wired article do tend to misrepresent Compressed Sensing as some kind of noise-remover, despeckler, or image enhancer. This is simply not the case. In Compressed Sensing, we are intentionally sampling a signal in an incoherent domain so that each measurement evaluates the entire image globally. In other words, each sample has as much weight as any other, so when we hold on to fewer of them, we may obtain more information about the original signal than if we sub-sampled the signal in the original domain. When we reconstruct the original image from our compressed/sub-sampled measurements in an incoherent domain, we are trying to find the most sparse signal that matches the measurements we observed (solving an ill-posed inverse problem via constrained optimization). The signal sparsity can be thought of the orderedness or "structured-ness" of the signal. In other words, the most ordered image that matches our compressed measurements is correct solution with high degree of probability. For a technical primer, check out this paper ( <a href="http://dsp.rice.edu/sites/dsp.rice.edu/files/cs/CSintro.pdf" title="rice.edu" rel="nofollow">http://dsp.rice.edu/sites/dsp.rice.edu/files/cs/CSintro.pdf</a> [rice.edu] ).

<br> <br>
Okay, yes, that might be a little bit weighty if you aren't in the field, but I would suggest you check out Nuit Blanche ( <a href="http://nuit-blanche.blogspot.com/" title="blogspot.com" rel="nofollow">http://nuit-blanche.blogspot.com/</a> [blogspot.com] ) for a description of what exactly CS is, how it works, and what it is useful for. Today's article is especially interesting in this regard.</htmltext>
<tokentext>The / .
headline and the Wired article do tend to misrepresent Compressed Sensing as some kind of noise-remover , despeckler , or image enhancer .
This is simply not the case .
In Compressed Sensing , we are intentionally sampling a signal in an incoherent domain so that each measurement evaluates the entire image globally .
In other words , each sample has as much weight as any other , so when we hold on to fewer of them , we may obtain more information about the original signal than if we sub-sampled the signal in the original domain .
When we reconstruct the original image from our compressed/sub-sampled measurements in an incoherent domain , we are trying to find the most sparse signal that matches the measurements we observed ( solving an ill-posed inverse problem via constrained optimization ) .
The signal sparsity can be thought of the orderedness or " structured-ness " of the signal .
In other words , the most ordered image that matches our compressed measurements is correct solution with high degree of probability .
For a technical primer , check out this paper ( http : //dsp.rice.edu/sites/dsp.rice.edu/files/cs/CSintro.pdf [ rice.edu ] ) .
Okay , yes , that might be a little bit weighty if you are n't in the field , but I would suggest you check out Nuit Blanche ( http : //nuit-blanche.blogspot.com/ [ blogspot.com ] ) for a description of what exactly CS is , how it works , and what it is useful for .
Today 's article is especially interesting in this regard .</tokentext>
<sentencetext>The /.
headline and the Wired article do tend to misrepresent Compressed Sensing as some kind of noise-remover, despeckler, or image enhancer.
This is simply not the case.
In Compressed Sensing, we are intentionally sampling a signal in an incoherent domain so that each measurement evaluates the entire image globally.
In other words, each sample has as much weight as any other, so when we hold on to fewer of them, we may obtain more information about the original signal than if we sub-sampled the signal in the original domain.
When we reconstruct the original image from our compressed/sub-sampled measurements in an incoherent domain, we are trying to find the most sparse signal that matches the measurements we observed (solving an ill-posed inverse problem via constrained optimization).
The signal sparsity can be thought of the orderedness or "structured-ness" of the signal.
In other words, the most ordered image that matches our compressed measurements is correct solution with high degree of probability.
For a technical primer, check out this paper ( http://dsp.rice.edu/sites/dsp.rice.edu/files/cs/CSintro.pdf [rice.edu] ).
Okay, yes, that might be a little bit weighty if you aren't in the field, but I would suggest you check out Nuit Blanche ( http://nuit-blanche.blogspot.com/ [blogspot.com] ) for a description of what exactly CS is, how it works, and what it is useful for.
Today's article is especially interesting in this regard.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329140</id>
	<title>Holy Bad Acronym Batman</title>
	<author>Anonymous</author>
	<datestamp>1267541880000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>Did we really need to refer to it as <b>CS</b> in the summary?  A quick glance of the summary could lead one to think that this guy is the inventor of <b>C</b>omputer <b>S</b>cience, rather than the correct <b>C</b>ompressed <b>S</b>ensing...  In the summary of an article that is concerned (in part) with maintaining information after compression, we lost quite a bit of information in abbreviating the name of his algorithm.</htmltext>
<tokentext>Did we really need to refer to it as CS in the summary ?
A quick glance of the summary could lead one to think that this guy is the inventor of Computer Science , rather than the correct Compressed Sensing... In the summary of an article that is concerned ( in part ) with maintaining information after compression , we lost quite a bit of information in abbreviating the name of his algorithm .</tokentext>
<sentencetext>Did we really need to refer to it as CS in the summary?
A quick glance of the summary could lead one to think that this guy is the inventor of Computer Science, rather than the correct Compressed Sensing...  In the summary of an article that is concerned (in part) with maintaining information after compression, we lost quite a bit of information in abbreviating the name of his algorithm.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329090</id>
	<title>Re:I am a bit worried about the "fill in the shape</title>
	<author>Anonymous</author>
	<datestamp>1267541580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>It is clear that in order for this to work it needs a "model" of the real world.  In his simple case the model is "everything has smooth colours" which matches his test image really well.  Trying to find an unexpected detail in a large image would be impossible with this model.</p><p>However if you have a good model of what you expect then it will probably find it.  Much like voice compression is very efficient because we know what to expect, if you have a good model of what you expect it will reconstruct it from limited data.</p><p>From a legal point of view it is creating what you expect to find from nothing so it may have a tendency to find what you are expecting!  So not much use in court where it just proves your assumptions.</p></htmltext>
<tokentext>It is clear that in order for this to work it needs a " model " of the real world .
In his simple case the model is " everything has smooth colours " which matches his test image really well .
Trying to find an unexpected detail in a large image would be impossible with this model.However if you have a good model of what you expect then it will probably find it .
Much like voice compression is very efficient because we know what to expect , if you have a good model of what you expect it will reconstruct it from limited data.From a legal point of view it is creating what you expect to find from nothing so it may have a tendency to find what you are expecting !
So not much use in court where it just proves your assumptions .</tokentext>
<sentencetext>It is clear that in order for this to work it needs a "model" of the real world.
In his simple case the model is "everything has smooth colours" which matches his test image really well.
Trying to find an unexpected detail in a large image would be impossible with this model.However if you have a good model of what you expect then it will probably find it.
Much like voice compression is very efficient because we know what to expect, if you have a good model of what you expect it will reconstruct it from limited data.From a legal point of view it is creating what you expect to find from nothing so it may have a tendency to find what you are expecting!
So not much use in court where it just proves your assumptions.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328826</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330682</id>
	<title>Re:Why not...</title>
	<author>ceoyoyo</author>
	<datestamp>1267549080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I might have misunderstood you but I don't think you can properly compare what you're talking about to changing the aperture of a camera and if you could it would be <i>decreasing</i> the aperture (more things in focus), not increasing it.  I think you're also talking about other techniques, such as acquiring the whole lightfield, that might well be made more practical by CS but aren't really the same thing.</p></htmltext>
<tokentext>I might have misunderstood you but I do n't think you can properly compare what you 're talking about to changing the aperture of a camera and if you could it would be decreasing the aperture ( more things in focus ) , not increasing it .
I think you 're also talking about other techniques , such as acquiring the whole lightfield , that might well be made more practical by CS but are n't really the same thing .</tokentext>
<sentencetext>I might have misunderstood you but I don't think you can properly compare what you're talking about to changing the aperture of a camera and if you could it would be decreasing the aperture (more things in focus), not increasing it.
I think you're also talking about other techniques, such as acquiring the whole lightfield, that might well be made more practical by CS but aren't really the same thing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328992</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328792</id>
	<title>Wouldn't it be easier...</title>
	<author>Anonymous</author>
	<datestamp>1267539720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>to just subscribe to Cinemax instead of going through all this trouble to de-scramble the pr0n?</htmltext>
<tokenext>to just subscribe to Cinemax instead of going through all this trouble to de-scramble the pr0n ?</tokentext>
<sentencetext>to just subscribe to Cinemax instead of going through all this trouble to de-scramble the pr0n?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329202</id>
	<title>Rather like most climate science</title>
	<author>Anonymous</author>
	<datestamp>1267542240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It doesn't add information, it just fills in what you already expected to see.</p></htmltext>
<tokenext>It does n't add information , it just fills in what you already expected to see .</tokentext>
<sentencetext>It doesn't add information, it just fills in what you already expected to see.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328824</id>
	<title>Come again?</title>
	<author>Anonymous</author>
	<datestamp>1267539900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place? ..</p></div><p>That's what a digital camera is about, isn't it?</p>
	</htmltext>
<tokenext>If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress , why not just save battery power and memory and record 90 percent less data in the first place ?
..That 's what a digital camera is about , is n't it ?</tokentext>
<sentencetext>If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place?
..That's what a digital camera is about, isn't it?
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328882</id>
	<title>Where's the plug-in?</title>
	<author>voodoo cheesecake</author>
	<datestamp>1267540200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It would be nice to have a GIMP plug-in for this.</htmltext>
<tokenext>It would be nice to have a GIMP plug-in for this .</tokentext>
<sentencetext>It would be nice to have a GIMP plug-in for this.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329060</id>
	<title>Useful but don't overdo it</title>
	<author>davidwr</author>
	<datestamp>1267541460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>When it comes to art photography, I for one would rather have a RAW image than a compressed one.</p><p>Why?  What the camera takes is not my final output.  I want to be able to choose what to manipulate and remove.</p><p>Now, for everyday snapshots, there might be something here.  But as others pointed out, it might be less efficient to do the compression in the sensor than the way it's being done today.</p><p>As for other applications, time will tell.</p></htmltext>
<tokenext>When it comes to art photography , I for one would rather have a RAW image than a compressed one . Why ?
What the camera takes is not my final output .
I want to be able to choose what to manipulate and remove . Now , for everyday snapshots , there might be something here .
But as others pointed out , it might be less efficient to do the compression in the sensor than the way it 's being done today . As for other applications , time will tell .</tokentext>
<sentencetext>When it comes to art photography, I for one would rather have a RAW image than a compressed one. Why?
What the camera takes is not my final output.
I want to be able to choose what to manipulate and remove. Now, for everyday snapshots, there might be something here.
But as others pointed out, it might be less efficient to do the compression in the sensor than the way it's being done today. As for other applications, time will tell.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329430</id>
	<title>Re:Useful but don't overdo it</title>
	<author>BetterSense</author>
	<datestamp>1267543380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>When it comes to art photography, I for one would rather have an original film copy that I can choose to scan or optically print rather than only a digital image, raw or otherwise.</htmltext>
<tokenext>When it comes to art photography , I for one would rather have an original film copy that I can choose to scan or optically print rather than only a digital image , raw or otherwise .</tokentext>
<sentencetext>When it comes to art photography, I for one would rather have an original film copy that I can choose to scan or optically print rather than only a digital image, raw or otherwise.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329060</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31333310</id>
	<title>Re:Military applications</title>
	<author>Kanel</author>
	<datestamp>1267559220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>No. Encryption already creates "noiselike" data, while the spread-spectrum method of radio transmission spreads the data across different frequencies. But it could still be detected as a source emitting electromagnetic radiation.</p></htmltext>
<tokenext>No .
Encryption already creates " noiselike " data , while the spread-spectrum method of radio transmission spreads the data across different frequencies .
But it could still be detected as a source emitting electromagnetic radiation .</tokentext>
<sentencetext>No.
Encryption already creates "noiselike" data, while the spread-spectrum method of radio transmission spreads the data across different frequencies.
But it could still be detected as a source emitting electromagnetic radiation.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328842</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31339868</id>
	<title>Re:CSI</title>
	<author>w0mprat</author>
	<datestamp>1267545780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Enhance!</p></div><p>It's not as simple as that. You also need a flashy fake UI on the computer that makes bleepy noises all the time, especially when characters arrive on the screen one by one.</p>
	</htmltext>
<tokenext>Enhance ! It 's not as simple as that .
You also need a flashy fake UI on the computer that makes bleepy noises all the time , especially when characters arrive on the screen one by one .</tokentext>
<sentencetext>Enhance! It's not as simple as that.
You also need a flashy fake UI on the computer that makes bleepy noises all the time, especially when characters arrive on the screen one by one.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328740</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330514</id>
	<title>Re:Why not...</title>
	<author>shabtai87</author>
	<datestamp>1267548420000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext><p>Amusingly enough, the idea of compressed sensing (I will rephrase for clarity) that a minimal sampling is needed for working with high dimensional data that can be described in a much smaller subspace at any given time has been used to describe neural processes in the visual cortex (V1). [See Redwood Center for Theoretical Neuroscience, <a href="https://redwood.berkeley.edu/" title="berkeley.edu" rel="nofollow">https://redwood.berkeley.edu/</a>] [berkeley.edu]. The lingo used is a bit different from the CS community's, but the math is essentially the same. The point being that compressed sensing could lead to answers a lot more natural for human perception than simply canceling out high frequencies.</p><p>Also the point is that CS leads to [near] perfect reconstruction for signals of a certain nature rather than the fuzziness that comes from some other algorithms that do not take the inherent sparsity of the signal into account.</p></htmltext>
<tokenext>Amusingly enough , the idea of compressed sensing ( I will rephrase for clarity ) that a minimal sampling is needed for working with high dimensional data that can be described in a much smaller subspace at any given time has been used to describe neural processes in the visual cortex ( V1 ) .
[ See Redwood Center for Theoretical Neuroscience , https://redwood.berkeley.edu/ ] [ berkeley.edu ] .
The lingo used is a bit different than the CS community , but the math is essentially the same .
The point being that compressed sensing could lead to answers a lot more natural for human perception than simply canceling out high frequencies . Also the point is that CS leads to [ near ] perfect reconstruction for signals of a certain nature rather than the fuzziness that comes from some other algorithms that do not take the inherent sparsity of the signal into account .</tokentext>
<sentencetext>Amusingly enough, the idea of compressed sensing (I will rephrase for clarity) that a minimal sampling is needed for working with high dimensional data that can be described in a much smaller subspace at any given time has been used to describe neural processes in the visual cortex (V1).
[See Redwood Center for Theoretical Neuroscience, https://redwood.berkeley.edu/] [berkeley.edu].
The lingo used is a bit different than the CS community, but the math is essentially the same.
The point being that compressed sensing could lead to answers a lot more natural for human perception than simply canceling out high frequencies. Also the point is that CS leads to [near] perfect reconstruction for signals of a certain nature rather than the fuzziness that comes from some other algorithms that do not take the inherent sparsity of the signal into account.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328778</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328842</id>
	<title>Military applications</title>
	<author>rcb1974</author>
	<datestamp>1267540020000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext>The military probably wants the ability to send/receive without revealing the data or the location of its source to the enemy.  For example, its nuclear subs need to surface in order to communicate, and they don't want the enemy to be able to use triangulation to pinpoint the location of the subs.  So, they make the data they're transmitting appear as noise.  That way if the enemy happens to be listening on that frequency, they don't detect anything.</htmltext>
<tokenext>The military probably wants the ability to send/receive without revealing the data or the location of its source to the enemy .
For example , its nuclear subs need to surface in order to communicate , and they do n't want the enemy to be able to use triangulation to pinpoint the location of the subs .
So , they make the data they 're transmitting appear as noise .
That way if the enemy happens to be listening on that frequency , they do n't detect anything .</tokentext>
<sentencetext>The military probably wants the ability to send/receive without revealing the data or the location of its source to the enemy.
For example, its nuclear subs need to surface in order to communicate, and they don't want the enemy to be able to use triangulation to pinpoint the location of the subs.
So, they make the data they're transmitting appear as noise.
That way if the enemy happens to be listening on that frequency, they don't detect anything.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329856</id>
	<title>You can't create something from nothing - can you?</title>
	<author>YourExperiment</author>
	<datestamp>1267545360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>As soon as I read the article, it seemed fishy to me. How can you create data where it doesn't already exist? If you take a scan of a patient, a tumour will either show up or not show up in the data. If it shows up, there's no need for enhancement. If it doesn't show up, no amount of enhancement can cause it to do so.</p><p>Then I came across <a href="http://terrytao.wordpress.com/2007/04/13/compressed-sensing-and-single-pixel-cameras/" title="wordpress.com">this blog post</a> [wordpress.com] by Terence Tao, one of the researchers mentioned in the Wired article.</p><p>It has some very interesting explanations of how this is supposed to work. I'm still not sure that I'm convinced though. Common sense is still screaming at me "this cannot possibly work" - but then that happens with quantum mechanics too.</p></htmltext>
<tokenext>As soon as I read the article , it seemed fishy to me .
How can you create data where it does n't already exist ?
If you take a scan of a patient , a tumour will either show up or not show up in the data .
If it shows up , there 's no need for enhancement .
If it does n't show up , no amount of enhancement can cause it to do so . Then I came across this blog post [ wordpress.com ] by Terence Tao , one of the researchers mentioned in the Wired article . It has some very interesting explanations of how this is supposed to work .
I 'm still not sure that I 'm convinced though .
Common sense is still screaming at me " this can not possibly work " - but then that happens with quantum mechanics too .</tokentext>
<sentencetext>As soon as I read the article, it seemed fishy to me.
How can you create data where it doesn't already exist?
If you take a scan of a patient, a tumour will either show up or not show up in the data.
If it shows up, there's no need for enhancement.
If it doesn't show up, no amount of enhancement can cause it to do so. Then I came across this blog post [wordpress.com] by Terence Tao, one of the researchers mentioned in the Wired article. It has some very interesting explanations of how this is supposed to work.
I'm still not sure that I'm convinced though.
Common sense is still screaming at me "this cannot possibly work" - but then that happens with quantum mechanics too.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328778</id>
	<title>Why not...</title>
	<author>Anonymous</author>
	<datestamp>1267539540000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><blockquote><div><p>If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place? ..</p></div>
</blockquote><p>
Because it's hard to know what is needed and what isn't to produce a photograph that still looks good to a human, and pushing that computing power down to the camera sensors, where power is more limited than on a computer, is unlikely to save either time or power.</p>
	</htmltext>
<tokenext>If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress , why not just save battery power and memory and record 90 percent less data in the first place ?
. . Because it 's hard to know what is needed and what is n't to produce a photograph that still looks good to a human , and pushing that computing power down to the camera sensors , where power is more limited than on a computer , is unlikely to save either time or power .</tokentext>
<sentencetext>If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place?
..

Because it's hard to know what is needed and what isn't to produce a photograph that still looks good to a human, and pushing that computing power down to the camera sensors, where power is more limited than on a computer, is unlikely to save either time or power.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329346</id>
	<title>Re:Holy Bad Acronym Batman</title>
	<author>Dunbal</author>
	<datestamp>1267542960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>At first I thought he was referring to Credit Suisse. Then I thought no, this is an article about Counter Strike. Then perhaps I thought it meant CS gas. Then perhaps, having been betrayed by an uncooperative context, I thought like you it meant Computer Science. But no - lo and behold "CS" stands for "Compressed Sensing", a new algorithm called "CS" by 1) those working on it and 2) those who have absolutely no idea what it is or how it works, but want to sound cool anyway because hey, what's cooler than using an acronym that ABSOLUTELY NO ONE has ever heard of? Forget the fact that this whole language thing is about "communication" and if you start inserting RA into your MF then NWFU!</p><p>(RA = Random Acronyms, MF = Message Format, NWFU = No-one Will Fucking Understand)</p></htmltext>
<tokenext>At first I thought he was referring to Credit Suisse .
Then I thought no , this is an article about Counter Strike .
Then perhaps I thought it meant CS gas .
Then perhaps , having been betrayed by an uncooperative context , I thought like you it meant Computer Science .
But no - lo and behold " CS " stands for " Compressed Sensing " , a new algorithm called " CS " by 1 ) those working on it and 2 ) those who have absolutely no idea what it is or how it works , but want to sound cool anyway because hey , what 's cooler than using an acronym that ABSOLUTELY NO ONE has ever heard of ?
Forget the fact that this whole language thing is about " communication " and if you start inserting RA into your MF then NWFU !
( RA = Random Acronyms , MF = Message Format , NWFU = No-one Will Fucking Understand )</tokentext>
<sentencetext>At first I thought he was referring to Credit Suisse.
Then I thought no, this is an article about Counter Strike.
Then perhaps I thought it meant CS gas.
Then perhaps, having been betrayed by an uncooperative context, I thought like you it meant Computer Science.
But no - lo and behold "CS" stands for "Compressed Sensing", a new algorithm called "CS" by 1) those working on it and 2) those who have absolutely no idea what it is or how it works, but want to sound cool anyway because hey, what's cooler than using an acronym that ABSOLUTELY NO ONE has ever heard of?
Forget the fact that this whole language thing is about "communication" and if you start inserting RA into your MF then NWFU!
(RA = Random Acronyms, MF = Message Format, NWFU = No-one Will Fucking Understand)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329140</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330530</id>
	<title>CuteOverload</title>
	<author>Quiet\_Desperation</author>
	<datestamp>1267548480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Super Redonkulous Fluffhance!</htmltext>
<tokenext>Super Redonkulous Fluffhance !</tokentext>
<sentencetext>Super Redonkulous Fluffhance!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328740</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330608</id>
	<title>Re:Holy Bad Acronym Batman</title>
	<author>Anonymous</author>
	<datestamp>1267548780000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I thought it stood for Child Stalker.  I was excited for a moment.</p></htmltext>
<tokenext>I thought it stood for Child Stalker .
I was excited for a moment .</tokentext>
<sentencetext>I thought it stood for Child Stalker.
I was excited for a moment.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329346</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31332782</id>
	<title>Single pixel cameras</title>
	<author>Ambitwistor</author>
	<datestamp>1267557180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Compressed sensing is the same mathematics behind the Rice <a href="http://science.slashdot.org/science/06/10/19/2255239.shtml" title="slashdot.org">single pixel camera</a> [slashdot.org] covered on Slashdot a few years ago.</p></htmltext>
<tokenext>Compressed sensing is the same mathematics behind the Rice single pixel camera [ slashdot.org ] covered on Slashdot a few years ago .</tokentext>
<sentencetext>Compressed sensing is the same mathematics behind the Rice single pixel camera [slashdot.org] covered on Slashdot a few years ago.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328944</id>
	<title>Questions...</title>
	<author>mcgrew</author>
	<datestamp>1267540680000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>0</modscore>
	<htmltext><p>Does this only apply to image data, or will we be able to use this to clean up other databases? Will it work with sampled sounds? Names and addresses and inventory?</p><p>More importantly, HOW does it work?</p><p>Sorry if TFA answers these questions, but I've never known Wired to get into any kind of detail on stuff like this.</p></htmltext>
<tokenext>Does this only apply to image data , or will we be able to use this to clean up other databases ?
Will it work with sampled sounds ?
Names and addresses and inventory ? More importantly , HOW does it work ? Sorry if TFA answers these questions , but I 've never known Wired to get into any kind of detail on stuff like this .</tokentext>
<sentencetext>Does this only apply to image data, or will we be able to use this to clean up other databases?
Will it work with sampled sounds?
Names and addresses and inventory? More importantly, HOW does it work? Sorry if TFA answers these questions, but I've never known Wired to get into any kind of detail on stuff like this.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329466</id>
	<title>Other applications</title>
	<author>zmaragdus</author>
	<datestamp>1267543620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I wonder if this can somehow be extended to other forms of data scrubbing besides two-dimensional color images. I've got a waveform capture of a really small, and really noisy, electric motor current that I want scrubbed without losing the shape I think I'm supposed to get out of it.</p></htmltext>
<tokenext>I wonder if this can somehow be extended to other forms of data scrubbing besides two-dimensional color images .
I 've got a waveform capture of a really small , and really noisy , electric motor current that I want scrubbed without losing the shape I think I 'm supposed to get out of it .</tokentext>
<sentencetext>I wonder if this can somehow be extended to other forms of data scrubbing besides two-dimensional color images.
I've got a waveform capture of a really small, and really noisy, electric motor current that I want scrubbed without losing the shape I think I'm supposed to get out of it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328992</id>
	<title>Re:Why not...</title>
	<author>Anonymous</author>
	<datestamp>1267540920000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext>In fact, it's expected to be used to increase the aperture of cameras. The advantage of this is that using random patterns you could determine the kernel of the convolving pattern in the picture; therefore, you would be able to re-focus the image after it was taken. In regular photography that kernel is normally Gaussian and very hard to de-blur. But using certain patterns when taking the picture (probably implemented as micro-mirrors), you could easily do this in post processing.</htmltext>
<tokenext>In fact , it 's expected to be used to increase the aperture of cameras .
The advantage of this is that using random patterns you could determine the kernel of the convolving pattern in the picture ; therefore , you would be able to re-focus the image after it was taken .
In regular photography that kernel is normally Gaussian and very hard to de-blur .
But using certain patterns when taking the picture ( probably implemented as micro-mirrors ) , you could easily do this in post processing .</tokentext>
<sentencetext>In fact, it's expected to be used to increase the aperture of cameras.
The advantage of this is that using random patterns you could determine the kernel of the convolving pattern in the picture; therefore, you would be able to re-focus the image after it was taken.
In regular photography that kernel is normally Gaussian and very hard to de-blur.
But using certain patterns when taking the picture (probably implemented as micro-mirrors), you could easily do this in post processing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328888</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329098</id>
	<title>Re:I am a bit worried about the "fill in the shape</title>
	<author>Anonymous</author>
	<datestamp>1267541580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Medical imaging has enough "artefacts" in the image as it is.</p></htmltext>
<tokenext>Medical imaging has enough " artefacts " in the image as it is .</tokentext>
<sentencetext>Medical imaging has enough "artefacts" in the image as it is.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328826</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31339676</id>
	<title>Re:What if you feed it noise?</title>
	<author>kramulous</author>
	<datestamp>1267544220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If we get goatse (or whatever) I'm gonna be pretty pissed at nature.</p></htmltext>
<tokenext>If we get goatse ( or whatever ) I 'm gon na be pretty pissed at nature .</tokentext>
<sentencetext>If we get goatse (or whatever) I'm gonna be pretty pissed at nature.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329020</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329442</id>
	<title>Re:Holy Bad Acronym Batman</title>
	<author>unitron</author>
	<datestamp>1267543500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yes, but a quick application of the Compressed Sensing Algorithm to the letters <b>CS</b> will shortly reveal that it stands for Compressed Sensing.</p><p>If it stood for Computer Science instead, the algorithm would have been able to sense that, in a compressed sort of way.</p></htmltext>
<tokenext>Yes , but a quick application of the Compressed Sensing Algorithm to the letters CS will shortly reveal that it stands for Compressed Sensing . If it stood for Computer Science instead , the algorithm would have been able to sense that , in a compressed sort of way .</tokentext>
<sentencetext>Yes, but a quick application of the Compressed Sensing Algorithm to the letters CS will shortly reveal that it stands for Compressed Sensing. If it stood for Computer Science instead, the algorithm would have been able to sense that, in a compressed sort of way.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329140</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329802</id>
	<title>Re:Why not...</title>
	<author>wfolta</author>
	<datestamp>1267545120000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>Actually, you don't process and throw away information. You are not Sensing and then Compressing, you are Compressed Sensing, so you take in less data in the first place.</p><p>A canonical example is a 1-pixel camera that uses a grid of micro-mirrors, each of which can be set to reflect onto the pixel or not. By setting the grid randomly, you are essentially doing a Random Projection of the data before it's recorded, so you are Compressed Sensing. With a sufficient number of these 1-pixel images, each with a different random mirror setup, you can reproduce the original image to some level of accuracy, using fewer bits than a JPEG/etc of similar quality. Unlike JPEG, you are not taking in a full set of data, then compressing, so it takes LESS processing power, not more.</p><p>So you save in image transmission bandwidth if the sensor is, say, orbiting Jupiter. And you save energy expended in compressing the image. And you could perhaps afford to make a VERY expensive single pixel imager that has an incredibly wide frequency range, which might be prohibitively expensive, or even impossible to fabricate in a larger array.</p><p>Personally, I think there's a lot of hype to CS, but it's definitely not the same as JPEG/Wavelet/etc compression after taking a full-resolution image.</p></htmltext>
<tokenext>Actually , you do n't process and throw away information .
You are not Sensing and then Compressing , you are Compressed Sensing , so you take in less data in the first place.A canonical example is a 1-pixel camera that uses a grid of micro-mirrors , each of which can be set to reflect onto the pixel or not .
By setting the grid randomly , you are essentially doing a Random Projection of the data before it 's recorded , so you are Compressed Sensing .
With a sufficient number of these 1-pixel images , each with a different random mirror setup you can reproduce the original image to some level of accuracy , using fewer bits than a JPEG/etc of similar quality .
Unlike JPEG , you are not taking in a full set of data , then compressing , so it takes LESS processing power , not more.So you save in image transmission bandwidth if the sensor is , say , orbiting Jupiter .
And you save energy expended in compressing the image .
And you could perhaps afford to make a VERY expensive single pixel imager that has an incredibly wide frequency range , which might be prohibitively expensive , or even impossible to fabricate in a larger array .
Personally , I think there 's a lot of hype to CS , but it 's definitely not the same as JPEG/Wavelet/etc compression after taking a full-resolution image .</tokentext>
<sentencetext>Actually, you don't process and throw away information.
You are not Sensing and then Compressing, you are Compressed Sensing, so you take in less data in the first place.
A canonical example is a 1-pixel camera that uses a grid of micro-mirrors, each of which can be set to reflect onto the pixel or not.
By setting the grid randomly, you are essentially doing a Random Projection of the data before it's recorded, so you are Compressed Sensing.
With a sufficient number of these 1-pixel images, each with a different random mirror setup, you can reproduce the original image to some level of accuracy, using fewer bits than a JPEG/etc of similar quality.
Unlike JPEG, you are not taking in a full set of data, then compressing, so it takes LESS processing power, not more.
So you save in image transmission bandwidth if the sensor is, say, orbiting Jupiter.
And you save energy expended in compressing the image.
And you could perhaps afford to make a VERY expensive single pixel imager that has an incredibly wide frequency range, which might be prohibitively expensive, or even impossible to fabricate in a larger array.
Personally, I think there's a lot of hype to CS, but it's definitely not the same as JPEG/Wavelet/etc compression after taking a full-resolution image.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328778</parent>
</comment>
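The random-projection scheme the comment above describes can be sketched in a few lines of Python. This is a toy illustration only: the dimensions, the Gaussian measurement matrix standing in for the random mirror settings, and the greedy recovery routine (orthogonal matching pursuit, used here in place of the L1-minimization solvers common in the compressed sensing literature) are all illustrative assumptions, not the actual single-pixel-camera pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, k):
    """Greedy sparse recovery (orthogonal matching pursuit): find a k-sparse
    x_hat with A @ x_hat close to y. A stand-in for the L1 solvers usually
    used in compressed sensing."""
    residual, support = y.copy(), []
    coeffs = np.zeros(0)
    for _ in range(k):
        # pick the column most correlated with the still-unexplained part of y
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # least-squares fit on the chosen columns, then update the residual
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coeffs
    return x_hat

n, m, k = 100, 60, 3                           # scene size, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random projections ("mirror settings")
x = np.zeros(n)
x[[5, 42, 77]] = [1.0, -0.8, 0.6]              # a sparse "scene": only 3 nonzero entries
y = A @ x                                      # 60 compressed measurements, never all 100 values

x_hat = omp(A, y, k)                           # receiver reconstructs the full scene
```

Note that the sensor side only ever computes `y = A @ x` (m numbers instead of n); all the reconstruction work happens at the receiver, which matches the comment's point about saving power and bandwidth at the camera.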
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329526</id>
	<title>Re:Why not...</title>
	<author>gravis777</author>
	<datestamp>1267543800000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Truthfully, I was thinking along the lines of taking a high resolution camera and making it better, rather than taking a low resolution camera and making it high. My aging Nikon is a 7.1 megapixel, with only a 3x optical zoom. There have been times I wanted to take a picture of something quick, so do not necessarily have time to zoom or move closer to the object. After cropping, I may end up with a 1-2 megapixel image (sometimes much lower). For the longest, I thought I just needed more megapixels, and a faster and higher powered optical zoom. However, looking at the pictures I have, I am like, if someone could just come up with something to make this look better... There is usually plenty of detail there for my eye, if something would come in and soften jaggie edges, sharpen the overall picture, and understand textures (such as clothing)...</p><p>Truthfully, with what I just talked about, I am looking for them to implement this in Photoshop so I can clean up some existing crappy photography of mine.</p></htmltext>
<tokenext>Truthfully , I was thinking along the lines of taking a high resolution camera and making it better , rather than taking a low resolution camera and making it high .
My aging Nikon is a 7.1 megapixel , with only a 3x optical zoom .
There have been times I wanted to take a picture of something quick , so do not necessarily have time to zoom or move closer to the object .
After cropping , I may end up with a 1-2 megapixel image ( sometimes much lower ) .
For the longest , I thought I just needed more megapixels , and a faster and higher powered optical zoom .
However , looking at the pictures I have , I am like , if someone could just come up with something to make this look better... There is usually plenty of detail there for my eye , if something would come in and soften jaggie edges , sharpen the overall picture , and understand textures ( such as clothing ) ...
Truthfully , with what I just talked about , I am looking for them to implement this in Photoshop so I can clean up some existing crappy photography of mine .</tokentext>
<sentencetext>Truthfully, I was thinking along the lines of taking a high resolution camera and making it better, rather than taking a low resolution camera and making it high.
My aging Nikon is a 7.1 megapixel, with only a 3x optical zoom.
There have been times I wanted to take a picture of something quick, so do not necessarily have time to zoom or move closer to the object.
After cropping, I may end up with a 1-2 megapixel image (sometimes much lower).
For the longest, I thought I just needed more megapixels, and a faster and higher powered optical zoom.
However, looking at the pictures I have, I am like, if someone could just come up with something to make this look better... There is usually plenty of detail there for my eye, if something would come in and soften jaggie edges, sharpen the overall picture, and understand textures (such as clothing)...
Truthfully, with what I just talked about, I am looking for them to implement this in Photoshop so I can clean up some existing crappy photography of mine.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328888</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31331678</id>
	<title>Re:Wrong.</title>
	<author>Anonymous</author>
	<datestamp>1267553460000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Ooo, look at me. I've posted on Slashdot. I'm so smart. I'm even smarter than the scientist doing actual work. And I do it all from my mommy's basement.</htmltext>
<tokenext>Ooo , look at me .
I 've posted on Slashdot .
I 'm so smart .
I 'm even smarter than the scientist doing actual work .
And I do it all from my mommy 's basement .</tokentext>
<sentencetext>Ooo, look at me.
I've posted on Slahdot.
I'm so smart.
I'm even smarter than the scientist doing actual work.
And I do it all from my mommy's basement.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329238</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328962</id>
	<title>Re:CSI</title>
	<author>Anonymous</author>
	<datestamp>1267540800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Seriously, watching a CS reconstruction is actually visually more impressive than what they do on CSI.  I coded up a demo and everyone calls it the magic algorithm.</p></htmltext>
<tokenext>Seriously , watching a CS reconstruction is actually visually more impressive than what they do on CS .
I coded up a demo and everyone calls it the magic algorithm .</tokentext>
<sentencetext>Seriously, watching a CS reconstruction is actually visually more impressive than what they do on CS.
I coded up a demo and everyone calls it the magic algorithm.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328740</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329830</id>
	<title>Caution: don't mis-apply this idea!</title>
	<author>MessyBlob</author>
	<datestamp>1267545180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>From the referenced reports, it looks like people might get the wrong idea about the possible applications. This algorithm starts with discrete data points with gaps in-between, and works out the remaining arbitrary data points in a pleasing way, as if it were a continuous field (represented as a Fourier transform, for example). </p><p>In other words, it works with data where the signal is already separated from the noise. My last sentence is crucial for an understanding of the possible applications: it will not infer elements that are absent in the measured signal, but will instead repeat elements that are already present. I expect this story will be mis-reported in future, by reporters who do not understand how it really works (and I might count myself in that, as I've only glanced at a couple of the arxiv papers).</p></htmltext>
<tokenext>From the referenced reports , it looks like people might get the wrong idea about the possible applications .
This algorithm starts with discrete data points with gaps in-between , and works out the remaining arbitrary data points in a pleasing way , as if it were a continuous field ( represented as a Fourier transform , for example ) .
In other words , it works with data where the signal is already separated from the noise .
My last sentence is crucial for an understanding of the possible applications : it will not infer elements that are absent in the measured signal , but will instead repeat elements that are already present .
I expect this story will be mis-reported in future , by reporters who do not understand how it really works ( and I might count myself in that , as I 've only glanced at a couple of the arxiv papers ) .</tokentext>
<sentencetext>From the referenced reports, it looks like people might get the wrong idea about the possible applications.
This algorithm starts with discrete data points with gaps in-between, and works out the remaining arbitrary data points in a pleasing way, as if it were a continuous field (represented as a Fourier transform, for example).
In other words, it works with data where the signal is already separated from the noise.
My last sentence is crucial for an understanding of the possible applications: it will not infer elements that are absent in the measured signal, but will instead repeat elements that are already present.
I expect this story will be mis-reported in future, by reporters who do not understand how it really works (and I might count myself in that, as I've only glanced at a couple of the arxiv papers).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328830</id>
	<title>Compressed message</title>
	<author>Anonymous</author>
	<datestamp>1267539960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>fgr ts ot no bch!</p></htmltext>
<tokenext>fgr ts ot no bch !</tokentext>
<sentencetext>fgr ts ot no bch!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31333686</id>
	<title>Re:Why not...</title>
	<author>ceoyoyo</author>
	<datestamp>1267560720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>What you really need is a better (bigger, heavier) lens.  In most post-megapixel-race cameras the maximum angular resolution is usually limited by the lens, not the sensor resolution.  CS and/or sensor upgrades can't correct for that because the information doesn't actually make it through the glass to be recorded.</p><p>If you just want to make those pictures look better, you can probably get some good results with some of Photoshop's edge enhancing and sharpening filters.  CS also makes a wicked noise filter (noise is not sparse and so is suppressed by CS) so it might be able to help you there.</p></htmltext>
<tokenext>What you really need is a better ( bigger , heavier ) lens .
In most post-megapixel-race cameras the maximum angular resolution is usually limited by the lens , not the sensor resolution .
CS and/or sensor upgrades ca n't correct for that because the information does n't actually make it through the glass to be recorded .
If you just want to make those pictures look better , you can probably get some good results with some of Photoshop 's edge enhancing and sharpening filters .
CS also makes a wicked noise filter ( noise is not sparse and so is suppressed by CS ) so it might be able to help you there .</tokentext>
<sentencetext>What you really need is a better (bigger, heavier) lens.
In most post-megapixel-race cameras the maximum angular resolution is usually limited by the lens, not the sensor resolution.
CS and/or sensor upgrades can't correct for that because the information doesn't actually make it through the glass to be recorded.
If you just want to make those pictures look better, you can probably get some good results with some of Photoshop's edge enhancing and sharpening filters.
CS also makes a wicked noise filter (noise is not sparse and so is suppressed by CS) so it might be able to help you there.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329526</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328856</id>
	<title>Re:Why not...</title>
	<author>Anonymous</author>
	<datestamp>1267540080000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>I think you are missing the point: throwing away 90% of the image was a demonstration of the capabilities of this algorithm. You would use it where you have only managed to capture a small amount of data, not capture the lot and throw away 90%.</htmltext>
<tokenext>I think you are missing the point : throwing away 90 % of the image was a demonstration of the capabilities of this algorithm .
You would use it where you have only managed to capture a small amount of data , not capture the lot and throw away 90 % .</tokentext>
<sentencetext>I think you are missing the point: throwing away 90% of the image was a demonstration of the capabilities of this algorithm.
You would use it where you have only managed to capture a small amount of data, not capture the lot and throw away 90%.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328778</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31331624</id>
	<title>Re:CSI</title>
	<author>Anonymous</author>
	<datestamp>1267553220000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>While some of the things they do on that show are hilariously stupid, it isn't that far from the truth.</p><p>Take a grid of pixels, say, a cube with an outline, and a diagonal split to form 2 internal triangles.<br>You can zoom to infinity on that image while keeping it as an outline simply by converting everything to vectors and doing a smart smooth on everything.<br>Edge Detection algorithms have become pretty impressive over the years.<br>In fact, even Windows XP comes with a pretty decent zoom, especially considering the age of it.  That is just average in comparison to the really good ones.<br>The only problem is the really really good ones require a lot of resources.</p><p>In most images, there is a lot of hidden data "between" pixels that do bounce back and forth due to scatter and noise.<br>With several frames, you have more of a chance of detecting the noise between pixels.<br>Of course, since a lot of data storage these days is lossy, this isn't going to matter anyway... lossy CCTV, AMAZING IDEA.</p><p>In fact, I am pretty sure there was some sort of "search engine" thing that allows you to draw a simple shape of things, describe each polygon and it will search for images to fill in those blanks.<br>Forgot the name of it though.  Pretty sure it was posted on here.</p></htmltext>
<tokenext>While some of the things they do on that show are hilariously stupid , it is n't that far from the truth .
Take a grid of pixels , say , a cube with an outline , and a diagonal split to form 2 internal triangles .
You can zoom to infinity on that image while keeping it as an outline simply by converting everything to vectors and doing a smart smooth on everything .
Edge Detection algorithms have become pretty impressive over the years .
In fact , even Windows XP comes with a pretty decent zoom , especially considering the age of it .
That is just average in comparison to the really good ones .
The only problem is the really really good ones require a lot of resources .
In most images , there is a lot of hidden data " between " pixels that do bounce back and forth due to scatter and noise .
With several frames , you have more of a chance of detecting the noise between pixels .
Of course , since a lot of data storage these days is lossy , this is n't going to matter anyway ... lossy CCTV , AMAZING IDEA .
In fact , I am pretty sure there was some sort of " search engine " thing that allows you to draw a simple shape of things , describe each polygon and it will search for images to fill in those blanks .
Forgot the name of it though .
Pretty sure it was posted on here .</tokentext>
<sentencetext>While some of the things they do on that show are hilariously stupid, it isn't that far from the truth.
Take a grid of pixels, say, a cube with an outline, and a diagonal split to form 2 internal triangles.
You can zoom to infinity on that image while keeping it as an outline simply by converting everything to vectors and doing a smart smooth on everything.
Edge Detection algorithms have become pretty impressive over the years.
In fact, even Windows XP comes with a pretty decent zoom, especially considering the age of it.
That is just average in comparison to the really good ones.
The only problem is the really really good ones require a lot of resources.
In most images, there is a lot of hidden data "between" pixels that do bounce back and forth due to scatter and noise.
With several frames, you have more of a chance of detecting the noise between pixels.
Of course, since a lot of data storage these days is lossy, this isn't going to matter anyway... lossy CCTV, AMAZING IDEA.
In fact, I am pretty sure there was some sort of "search engine" thing that allows you to draw a simple shape of things, describe each polygon and it will search for images to fill in those blanks.
Forgot the name of it though.
Pretty sure it was posted on here.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328740</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330698</id>
	<title>Re:Military applications</title>
	<author>cxx</author>
	<datestamp>1267549140000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Although this might be one application, it's more likely that the military would want to use it for intelligence gathering: imagine how much information is out there, if only we could separate it out from the noise?</p><p>Reading the article, however, I'm not sure that this by itself would serve much purpose even then, though, given digital communications.  Someone feel free to correct me if I'm wrong on that point.</p></htmltext>
<tokenext>Although this might be one application , it 's more likely that the military would want to use it for intelligence gathering : imagine how much information is out there , if only we could separate it out from the noise ?
Reading the article , however , I 'm not sure that this by itself would serve much purpose even then , though , given digital communications .
Someone feel free to correct me if I 'm wrong on that point .</tokentext>
<sentencetext>Although this might be one application, it's more likely that the military would want to use it for intelligence gathering: imagine how much information is out there, if only we could separate it out from the noise?
Reading the article, however, I'm not sure that this by itself would serve much purpose even then, though, given digital communications.
Someone feel free to correct me if I'm wrong on that point.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328842</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329548</id>
	<title>Re:Demo image</title>
	<author>Anonymous</author>
	<datestamp>1267543920000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>It absolutely could be, just read the article: "Eventually it creates an image that will <b>almost certainly</b> be a near-perfect facsimile of a hi-res one."!</p></htmltext>
<tokenext>It absolutely could be , just read the article : " Eventually it creates an image that will almost certainly be a near-perfect facsimile of a hi-res one . " !</tokentext>
<sentencetext>It absolutely could be, just read the article: "Eventually it creates an image that will almost certainly be a near-perfect facsimile of a hi-res one."!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328864</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329576</id>
	<title>Re:Questions...</title>
	<author>Bakkster</author>
	<datestamp>1267544040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>More importantly, HOW does it work?</p><p>Sorry if TFA answers these questions, but I've never known Wired to get into any kind of detail on stuff like this.</p></div><p>From TFA:</p><div class="quote"><p>The key to finding the single correct representation is a notion called sparsity, a mathematical way of describing an image&rsquo;s complexity, or lack thereof. A picture made up of a few simple, understandable elements &mdash; like solid blocks of color or wiggly lines &mdash; is sparse; a screenful of random, chaotic dots is not. It turns out that out of all the bazillion possible reconstructions, the simplest, or sparsest, image is almost always the right one or very close to it.</p></div><p>So any dataset that is likely to be smooth can be improved with this technique.  They give the example in TFA of piano music (except for percussion, the frequencies present are consistent for a significant period of time).  Names, addresses, and inventory are for all intents and purposes here random.  You can't determine the address of someone in a database by looking at the adjacent entries.</p>
	</htmltext>
<tokenext>More importantly , HOW does it work ?
Sorry if TFA answers these questions , but I 've never known Wired to get into any kind of detail on stuff like this .
From TFA : The key to finding the single correct representation is a notion called sparsity , a mathematical way of describing an image 's complexity , or lack thereof .
A picture made up of a few simple , understandable elements — like solid blocks of color or wiggly lines — is sparse ; a screenful of random , chaotic dots is not .
It turns out that out of all the bazillion possible reconstructions , the simplest , or sparsest , image is almost always the right one or very close to it .
So any dataset that is likely to be smooth can be improved with this technique .
They give the example in TFA of piano music ( except for percussion , the frequencies present are consistent for a significant period of time ) .
Names , addresses , and inventory are for all intents and purposes here random .
You ca n't determine the address of someone in a database by looking at the adjacent entries .</tokentext>
<sentencetext>More importantly, HOW does it work?
Sorry if TFA answers these questions, but I've never known Wired to get into any kind of detail on stuff like this.
From TFA: The key to finding the single correct representation is a notion called sparsity, a mathematical way of describing an image’s complexity, or lack thereof.
A picture made up of a few simple, understandable elements — like solid blocks of color or wiggly lines — is sparse; a screenful of random, chaotic dots is not.
It turns out that out of all the bazillion possible reconstructions, the simplest, or sparsest, image is almost always the right one or very close to it.
So any dataset that is likely to be smooth can be improved with this technique.
They give the example in TFA of piano music (except for percussion, the frequencies present are consistent for a significant period of time).
Names, addresses, and inventory are for all intents and purposes here random.
You can't determine the address of someone in a database by looking at the adjacent entries.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328944</parent>
</comment>
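The sparsity idea quoted above (TFA's piano example: only a few frequencies active at a time) can be made concrete with a toy reconstruction: build a signal out of two cosines, keep only a random subset of its time samples, and search for the fewest frequencies that explain those samples. The cosine dictionary, the sample counts, and the greedy selection loop are illustrative assumptions, not the algorithm from TFA.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 256, 80                      # full signal length, random samples kept
t = np.arange(n)
# toy cosine dictionary: column f is a unit-norm cosine at frequency index f
D = np.cos(np.pi * np.outer(t + 0.5, np.arange(n)) / n)
D /= np.linalg.norm(D, axis=0)

# a two-note "chord": sparse in frequency, dense in time
s = np.zeros(n)
s[[12, 70]] = [1.0, 0.6]
signal = D @ s

# measure only 80 of the 256 time points, chosen at random
idx = rng.choice(n, m, replace=False)
A, y = D[idx, :], signal[idx]

# greedy sparse recovery: explain the samples with as few frequencies as possible
support, residual = [], y.copy()
coeffs = np.zeros(0)
for _ in range(2):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coeffs

s_hat = np.zeros(n)
s_hat[support] = coeffs
reconstruction = D @ s_hat          # full 256-point waveform from 80 samples
```

Because the true signal really is 2-sparse in the cosine basis, the sparsest explanation of the 80 samples coincides with the original waveform, which is exactly the "sparsest image is almost always the right one" claim from the quoted passage.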
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31332686</id>
	<title>Not for my MRI thank you.</title>
	<author>Thanatiel</author>
	<datestamp>1267556760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>To clear up and guess "details" in such a manner that a picture, wave, music, whatever can be seen or heard more easily by a human is very nice.  Good for old pictures and sounds.  I can even buy garbled culprit face reconstruction as long as it cannot be used as proof in a court.  This sounds like a new, must-have, expensive, photoshop/gimp filter and congratulations.</p><p>But<nobr> <wbr></nobr>...</p><p>Anybody doing anything serious would use a secure, ciphered, way of communication.  Not clear text, clear waves, or screen fonts/colors easy to "measure" electromagnetically from afar.  So the eavesdropping enemy communication does not, in my humble opinion, hold. (Maybe a century ago)</p><p>And last but not least. The bigger issue is that it does not show the most important thing: reality.<br>Nobody can create reality from a subset.  You can be smart as a monkey* but you can only guess, presume, imagine what's missing.<br>If an MRI is taken from any part of my body, I want ALL the REAL dots there.  Even one missing dot could actually be something serious (Will it be guessed ? Not guessed ? Just my luck.).  And a wrongly guessed one could make me panic enough to give me a serious heart condition.  So no thank you.</p><p>*:this sounds better in my native tongue</p></htmltext>
<tokenext>To clear up and guess " details " in such a manner that a picture , wave , music , whatever can be seen or heard more easily by a human is very nice .
Good for old pictures and sounds .
I can even buy garbled culprit face reconstruction as long as it can not be used as proof in a court .
This sounds like a new , must-have , expensive , photoshop/gimp filter and congratulations .
But ... Anybody doing anything serious would use a secure , ciphered , way of communication .
Not clear text , clear waves , or screen fonts/colors easy to " measure " electromagnetically from afar .
So the eavesdropping enemy communication does not , in my humble opinion , hold .
( Maybe a century ago )
And last but not least .
The bigger issue is that it does not show the most important thing : reality .
Nobody can create reality from a subset .
You can be smart as a monkey * but you can only guess , presume , imagine what 's missing .
If an MRI is taken from any part of my body , I want ALL the REAL dots there .
Even one missing dot could actually be something serious ( Will it be guessed ?
Not guessed ?
Just my luck. ) .
And a wrongly guessed one could make me panic enough to give me a serious heart condition .
So no thank you .
* : this sounds better in my native tongue</tokentext>
<sentencetext>To clear up and guess "details" in such a manner that a picture, wave, music, whatever can be seen or heard more easily by a human is very nice.
Good for old pictures and sounds.
I can even buy garbled culprit face reconstruction as long as it cannot be used as proof in a court.
This sounds like a new, must-have, expensive, photoshop/gimp filter and congratulations.
But ... Anybody doing anything serious would use a secure, ciphered, way of communication.
Not clear text, clear waves, or screen fonts/colors easy to "measure" electromagnetically from afar.
So the eavesdropping enemy communication does not, in my humble opinion, hold.
(Maybe a century ago)
And last but not least.
The bigger issue is that it does not show the most important thing: reality.
Nobody can create reality from a subset.
You can be smart as a monkey* but you can only guess, presume, imagine what's missing.
If an MRI is taken from any part of my body, I want ALL the REAL dots there.
Even one missing dot could actually be something serious (Will it be guessed ?
Not guessed ?
Just my luck.).
And a wrongly guessed one could make me panic enough to give me a serious heart condition.
So no thank you.
*:this sounds better in my native tongue</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329402</id>
	<title>Re:CSI</title>
	<author>Anonymous</author>
	<datestamp>1267543200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Which keyboard button did you choose for the Kill Switch?</p></htmltext>
<tokenext>Which keyboard button did you choose for the Kill Switch ?</tokentext>
<sentencetext>Which keyboard button did you choose for the Kill Switch?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328962</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329902</id>
	<title>Re:Why not...</title>
	<author>SQLGuru</author>
	<datestamp>1267545600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>And in fact, were that camera orbiting Jupiter, it would only have to send the 10% data back to Earth where the reconstruction could take place.  It turns into "real-time" compression.</p></htmltext>
<tokenext>And in fact , were that camera orbiting Jupiter , it would only have to send the 10 % data back to Earth where the reconstruction could take place .
It turns into " real-time " compression .</tokentext>
<sentencetext>And in fact, were that camera orbiting Jupiter, it would only have to send the 10% data back to Earth where the reconstruction could take place.
It turns into "real-time" compression.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328888</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329898</id>
	<title>Re:I am a bit worried about the "fill in the shape</title>
	<author>ascari</author>
	<datestamp>1267545600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>In the old movie "The Conversation" Gene Hackman walks right into that trap when he infers away all the nuances inside the spotty data of a surveillance recording. Two lessons: 1 - Same dangers, different application. 2 - Same fundamental method, different decade, nothing really new here.</htmltext>
<tokenext>In the old movie " The Conversation " Gene Hackman walks right into that trap when he infers away all the nuances inside the spotty data of a surveillance recording .
Two lessons : 1 - Same dangers , different application .
2 - Same fundamental method , different decade , nothing really new here .</tokentext>
<sentencetext>In the old movie "The Conversation" Gene Hackman walks right into that trap when he infers away all the nuances inside the spotty data of a surveillance recording.
Two lessons: 1 - Same dangers, different application.
2 - Same fundamental method, different decade, nothing really new here.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328826</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329692</id>
	<title>Re:I am a bit worried about the "fill in the shape</title>
	<author>Anonymous</author>
	<datestamp>1267544640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>The thing is in a medical image couldn't that actually remove a small
growth or lesion?</p></div>
</blockquote><p>

While I'm certainly no expert on this, it seems almost everyone here
is being misled by the word "noise".  From what I gather, this
is <b>not</b> cleaning up noise; it is filling in missing pieces
in data whose samples are assumed to be noise-free.  This is
drastically different from "smoothing", which is intended to
filter out noise.

</p><p>
So, in the case of a small growth or lesion, as long as there is
at least one sample of it that is different from the surrounding
area, the "sparsity" (this is my guess based on a quick
reading of the article and some related ones) would result in an
identifiable spot of some kind.  This would be due
to the fact that the one pixel sample of the lesion
is different from its closest available neighbors.  This difference
would be assumed by the algorithm
to be an accurate representation of that pixel, not a random
speck of noise.  So, something would show up, say a small blob,
that would be obviously different in the reconstructed image.
Now the fewer pixels you have of this lesion, the less accurate
the shape and size of that blob will be, but nonetheless it is
something that would stand out and warrant further investigation.</p>
	</htmltext>
<tokentext>The thing is in a medical image could n't that actually remove a small growth or lesion ?
While I 'm certainly no expert on this , it seems almost everyone here is being misled by the word " noise " .
From what I gather , this is not cleaning up noise , it is filling in missing pieces in data whose samples are assumed to be noise-free .
This is drastically different from " smoothing " that is intended to filter out noise .
So , in the case of a small growth or lesion , as long as there is at least one sample of it that is different from the surrounding area , the " sparsity " ( this is my guess based on a quick reading of the article and some related ones ) would result in an identifiable spot of some kind .
This would be due to the fact that the one pixel sample of the lesion is different from its closest available neighbors .
This difference would be assumed by the algorithm to be an accurate representation of that pixel , not a random speck of noise .
So , something would show up , say a small blob , that would be obviously different in the reconstructed image .
Now the fewer pixels you have of this lesion , the less accurate the shape and size of that blob will be , but nonetheless it is something that would stand out and warrant further investigation .</tokentext>
<sentencetext>The thing is in a medical image couldn't that actually remove a small
growth or lesion?
While I'm certainly no expert on this, it seems almost everyone here
is being misled by the word "noise".
From what I gather, this
is not cleaning up noise, it is filling in missing pieces
in data whose samples are assumed to be noise-free.
This is
drastically different from "smoothing" that is intended to
filter out noise.
So, in the case of a small growth or lesion, as long as there is
at least one sample of it that is different from the surrounding
area, the "sparsity" (this is my guess based on a quick
reading of the article and some related ones) would result in an
identifiable spot of some kind.
This would be due
to the fact that the one pixel sample of the lesion
is different from its closest available neighbors.
This difference
would be assumed by the algorithm
to be an accurate representation of that pixel, not a random
speck of noise.
So, something would show up, say a small blob,
that would be obviously different in the reconstructed image.
Now the fewer pixels you have of this lesion, the less accurate
the shape and size of that blob will be, but nonetheless it is
something that would stand out and warrant further investigation.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328826</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328888</id>
	<title>Re:Why not...</title>
	<author>eldavojohn</author>
	<datestamp>1267540260000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext><p><div class="quote"><blockquote><div><p>If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place?<nobr> <wbr></nobr>..</p></div></blockquote><p>
Because it's hard to know what is needed and what isn't to produce a photograph that still looks good to a human, and pushing that computing power down to the camera sensors, where power is more limited than on a computer, is unlikely to save either time or power.</p></div><p>If you read the article, the rest of that quote makes a lot more sense.  Here it is in context:</p><p><div class="quote"><p>If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place? For digital snapshots of your kids, battery waste may not matter much; you just plug in and recharge. &ldquo;But when the battery is orbiting Jupiter,&rdquo; Cand&#232;s says, &ldquo;it&rsquo;s a different story.&rdquo; Ditto if you want your camera to snap a photo with a trillion pixels instead of a few million.</p></div><p>So, while this strategy might not be implemented in my Canon Powershot anytime soon, it sounds like a really great idea for exploration or just limited resources in general.  I was thinking more along the lines of making really crappy-resolution, low-power cameras that are very cheap but distributing them with this software that takes the images on your computer and processes them into highly detailed images.</p></div>
	</htmltext>
<tokentext>If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress , why not just save battery power and memory and record 90 percent less data in the first place ?
Because it 's hard to know what is needed and what is n't to produce a photograph that still looks good to a human , and pushing that computing power down to the camera sensors where power is more limited than on a computer is unlikely to save either time or power . If you read the article , the rest of that quote makes a lot more sense .
Here it is in context : If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress , why not just save battery power and memory and record 90 percent less data in the first place ?
For digital snapshots of your kids , battery waste may not matter much ; you just plug in and recharge .
" But when the battery is orbiting Jupiter , " Candès says , " it 's a different story . " Ditto if you want your camera to snap a photo with a trillion pixels instead of a few million . So , while this strategy might not be implemented in my Canon Powershot anytime soon , it sounds like a really great idea for exploration or just limited resources in general .
I was thinking more along the lines of making really crappy resolution low power cameras that are very cheap but distributing them with this software that takes the images on your computer and processes them to make them highly defined images .</tokentext>
<sentencetext>If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place?
Because it's hard to know what is needed and what isn't to produce a photograph that still looks good to a human, and pushing that computing power down to the camera sensors, where power is more limited than on a computer, is unlikely to save either time or power. If you read the article, the rest of that quote makes a lot more sense.
Here it is in context: If your camera is going to record a vast amount of data only to throw away 90 percent of it when you compress, why not just save battery power and memory and record 90 percent less data in the first place?
For digital snapshots of your kids, battery waste may not matter much; you just plug in and recharge.
“But when the battery is orbiting Jupiter,” Candès says, “it’s a different story.” Ditto if you want your camera to snap a photo with a trillion pixels instead of a few million. So, while this strategy might not be implemented in my Canon Powershot anytime soon, it sounds like a really great idea for exploration or just limited resources in general.
I was thinking more along the lines of making really crappy resolution low power cameras that are very cheap but distributing them with this software that takes the images on your computer and processes them to make them highly defined images.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328778</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329252</id>
	<title>Re:CSI</title>
	<author>halcyon1234</author>
	<datestamp>1267542540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>[geek mode]</p><p>It actually reminds me more of that ST:TNG episode with Yuta. They're able to take a picture with someone's face half-blocked out by scenery and other people. They're able to reconstruct the rest of the face based on the patterns that are there.</p></htmltext>
<tokentext>[ geek mode ] It actually reminds me more of that ST : TNG episode with Yuta .
They 're able to take a picture with someone 's face half-blocked out by scenery and other people .
They 're able to reconstruct the rest of the face based on the patterns that are there .</tokentext>
<sentencetext>[geek mode] It actually reminds me more of that ST:TNG episode with Yuta.
They're able to take a picture with someone's face half-blocked out by scenery and other people.
They're able to reconstruct the rest of the face based on the patterns that are there.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328740</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328864</id>
	<title>Demo image</title>
	<author>Anonymous</author>
	<datestamp>1267540140000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>I seriously doubt that the Obama demo image is real. There is no way that the teeth and the little badge on his jacket could be produced without creating visual artifacts.</p></htmltext>
<tokentext>I seriously doubt that the Obama demo image is real .
There is no way that the teeth and the little badge on his jacket could be produced without creating visual artifacts .
<sentencetext>I seriously doubt that the Obama demo image is real.
There is no way that the teeth and the little badge on his jacket could be produced without creating visual artifacts.
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329356</id>
	<title>Possibly fraud</title>
	<author>junglebeast</author>
	<datestamp>1267542960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I could not find any examples showing similar image reconstructions on Jarvis Haupt or Robert Nowak's websites/publication histories -- the researchers credited with the Obama restoration photo.</p><p>Therefore, I am skeptical: this Wired article is not to be trusted.</p></htmltext>
<tokentext>I could not find any examples showing similar image reconstructions on Jarvis Haupt or Robert Nowak 's websites/publication histories -- the researchers credited with the Obama restoration photo . Therefore , I am skeptical : this Wired article is not to be trusted .</tokentext>
<sentencetext>I could not find any examples showing similar image reconstructions on Jarvis Haupt or Robert Nowak's websites/publication histories -- the researchers credited with the Obama restoration photo. Therefore, I am skeptical: this Wired article is not to be trusted.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328740</id>
	<title>CSI</title>
	<author>Anonymous</author>
	<datestamp>1267539300000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext>Enhance!</htmltext>
<tokentext>Enhance !</tokentext>
<sentencetext>Enhance!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328854</id>
	<title>applications</title>
	<author>Anonymous</author>
	<datestamp>1267540080000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Digital photography - compensate for noisy sensors.</p><p>Code breaking</p><p>Code making</p><p>telecommunications</p><p>video compression</p><p>I see some really interesting products coming down the line.</p></htmltext>
<tokentext>Digital photography - compensate for noisy sensors . Code breaking . Code making . Telecommunications . Video compression . I see some really interesting products coming down the line .</tokentext>
<sentencetext>Digital photography - compensate for noisy sensors. Code breaking. Code making. Telecommunications. Video compression. I see some really interesting products coming down the line.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329462</id>
	<title>Re:Why not...</title>
	<author>Anonymous</author>
	<datestamp>1267543560000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>1</modscore>
	<htmltext><p>You have to understand that the digital camera example is a toy example, i.e., the theory works beautifully but it has little use in practice (in this particular configuration). The other example that is mentioned in the article (MRI) better showcases the advantages of CS. When it takes about 200s to take a full acquisition of the image, you can take much fewer measurements in ~40s and then reconstruct the image using a CS algorithm. There are other examples where using CS brings similar advantages in practice; mostly when acquiring a single measurement is either expensive or takes a long time.</p></htmltext>
<tokentext>You have to understand that the digital camera example is a toy example , i.e. , the theory works beautifully but it has little use in practice ( in this particular configuration ) .
The other example that is mentioned in the article ( MRI ) better showcases the advantages of CS .
When it takes about 200s to take a full acquisition of the image , you can take much fewer measurements in ~ 40s and then reconstruct the image using a CS algorithm .
There are other examples where using CS brings similar advantages in practice ; mostly when acquiring a single measurement is either expensive or takes a long time .</tokentext>
<sentencetext>You have to understand that the digital camera example is a toy example, i.e., the theory works beautifully but it has little use in practice (in this particular configuration).
The other example that is mentioned in the article (MRI) better showcases the advantages of CS.
When it takes about 200s to take a full acquisition of the image, you can take much fewer measurements in ~40s and then reconstruct the image using a CS algorithm.
There are other examples where using CS brings similar advantages in practice; mostly when acquiring a single measurement is either expensive or takes a long time.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328778</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329158</id>
	<title>Re:Questions...</title>
	<author>azaris</author>
	<datestamp>1267541940000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p><div class="quote"><p>Does this only apply to image data, or will we be able to use this to clean up other databases? Will it work with sampled sounds? Names and addresses and inventory?</p></div><p>Of course not. It's not magic. There are certain assumptions that can be made about most real-life images, mainly that they have small total variation. That means they have large areas of near-constant intensity/color distribution separated by interfaces with large jumps (like a cartoon image would have).

</p><p>Though this method uses the l_1 norm and not total variation.</p><p><div class="quote"><p>More importantly, HOW does it work?</p></div><p>See <a href="http://arxiv.org/abs/math.CA/0410542" title="arxiv.org">here</a> [arxiv.org].</p></div>
	</htmltext>
<tokentext>Does this only apply to image data , or will we be able to use this to clean up other databases ?
Will it work with sampled sounds ?
Names and addresses and inventory ? Of course not .
It 's not magic .
There are certain assumptions that can be made about most real-life images , mainly that they have small total variation .
That means they have large areas of near-constant intensity/color distribution separated by interfaces with large jumps ( like a cartoon image would have ) .
Though this method uses the l_1 norm and not total variation . More importantly , HOW does it work ? See here [ arxiv.org ] .</tokentext>
<sentencetext>Does this only apply to image data, or will we be able to use this to clean up other databases?
Will it work with sampled sounds?
Names and addresses and inventory? Of course not.
It's not magic.
There are certain assumptions that can be made about most real-life images, mainly that they have small total variation.
That means they have large areas of near-constant intensity/color distribution separated by interfaces with large jumps (like a cartoon image would have).
Though this method uses the l_1 norm and not total variation. More importantly, HOW does it work? See here [arxiv.org].
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328944</parent>
</comment>
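The "small total variation" assumption in the comment above is easy to see numerically. A minimal sketch in plain NumPy, using a made-up 1-D "cartoon" signal (the signal and values are illustrative, not from the article):

```python
import numpy as np

def total_variation(x):
    """Sum of absolute jumps between neighboring samples (1-D total variation)."""
    return float(np.sum(np.abs(np.diff(x))))

# A "cartoon" signal: large flat regions separated by a few sharp jumps.
cartoon = np.concatenate([np.zeros(50), np.ones(50) * 3.0, np.ones(50)])

# The same signal with per-sample noise: every neighbor pair now differs.
noisy = cartoon + 0.1 * np.random.default_rng(0).standard_normal(150)

print(total_variation(cartoon))   # 5.0 -- just the two jumps: |3-0| + |1-3|
print(total_variation(noisy) > total_variation(cartoon))   # True
```

The cartoon signal's variation is concentrated in two jumps, which is exactly the kind of structure that l_1-style penalties exploit; the noisy version spreads small jumps everywhere and its total variation blows up.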
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329120</id>
	<title>I could do this in PhotoShop.</title>
	<author>jellomizer</author>
	<datestamp>1267541760000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><p>After applying the Noise filter to mess up my image I hit Undo and my image is back to normal.</p></htmltext>
<tokentext>After applying the Noise filter to mess up my image I hit Undo and my image is back to normal .</tokentext>
<sentencetext>After applying the Noise filter to mess up my image I hit Undo and my image is back to normal.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329792</id>
	<title>Re:I am a bit worried about the "fill in the shape</title>
	<author>rickyars</author>
	<datestamp>1267545060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>i agree, the description of the algorithm is too vague to really understand what is going on.</p><p>30 seconds of googling turned up this brief lecture on compressed sensing. written for undergrads, "the prerequisites for understanding this lecture note material are linear algebra, basic optimization, and basic probability."<br><a href="http://dsp.rice.edu/sites/dsp.rice.edu/files/cs/baraniukCSlecture07.pdf" title="rice.edu" rel="nofollow">http://dsp.rice.edu/sites/dsp.rice.edu/files/cs/baraniukCSlecture07.pdf</a> [rice.edu]</p><p>side note: rich baraniuk was one of the best professors i had in undergrad</p></htmltext>
<tokentext>i agree , the description of the algorithm is too vague to really understand what is going on . 30 seconds of googling turned up this brief lecture on compressed sensing .
written for undergrads , " the prerequisites for understanding this lecture note material are linear algebra , basic optimization , and basic probability .
" http://dsp.rice.edu/sites/dsp.rice.edu/files/cs/baraniukCSlecture07.pdf [ rice.edu ] side note : rich baraniuk was one of the best professors i had in undergrad</tokentext>
<sentencetext>i agree, the description of the algorithm is too vague to really understand what is going on. 30 seconds of googling turned up this brief lecture on compressed sensing.
written for undergrads, "the prerequisites for understanding this lecture note material are linear algebra, basic optimization, and basic probability.
"http://dsp.rice.edu/sites/dsp.rice.edu/files/cs/baraniukCSlecture07.pdf [rice.edu] side note: rich baraniuk was one of the best professors i had in undergrad</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329206</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329206</id>
	<title>Re:I am a bit worried about the "fill in the shape</title>
	<author>ceoyoyo</author>
	<datestamp>1267542240000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>The description of the algorithm in the article is quite poor.  To reconstruct an MR image you effectively model it with wavelet basis functions, subject to some constraints: a) the wavelet domain should be as sparse as possible, b) the Fourier coefficients you actually acquired (MR is acquired in the Fourier domain, not the image domain) have to match and usually c) the image should be real.  You often also require that the total variation of the image should be as low as possible as well.</p><p>Since the image is acquired in the Fourier domain, every measurement you make contains information about all the pixels in the image.  For reasonable* under-acquisitions, CS can produce a perfectly reconstructed image.</p><p>* the exact limits of "reasonable" are still under investigation, but typically you only need to acquire about a quarter of the data to be pretty much guaranteed you'll be able to get a perfect reconstruction.</p></htmltext>
<tokentext>The description of the algorithm in the article is quite poor .
To reconstruct an MR image you effectively model it with wavelet basis functions , subject to some constraints : a ) the wavelet domain should be as sparse as possible , b ) the Fourier coefficients you actually acquired ( MR is acquired in the Fourier domain , not the image domain ) have to match and usually c ) the image should be real .
You often also require that the total variation of the image should be as low as possible as well . Since the image is acquired in the Fourier domain , every measurement you make contains information about all the pixels in the image .
For reasonable * under-acquisitions , CS can produce a perfectly reconstructed image .
* the exact limits of " reasonable " are still under investigation , but typically you only need to acquire about a quarter of the data to be pretty much guaranteed you 'll be able to get a perfect reconstruction .</tokentext>
<sentencetext>The description of the algorithm in the article is quite poor.
To reconstruct an MR image you effectively model it with wavelet basis functions, subject to some constraints: a) the wavelet domain should be as sparse as possible, b) the Fourier coefficients you actually acquired (MR is acquired in the Fourier domain, not the image domain) have to match and usually c) the image should be real.
You often also require that the total variation of the image should be as low as possible as well. Since the image is acquired in the Fourier domain, every measurement you make contains information about all the pixels in the image.
For reasonable* under-acquisitions, CS can produce a perfectly reconstructed image.
* the exact limits of "reasonable" are still under investigation, but typically you only need to acquire about a quarter of the data to be pretty much guaranteed you'll be able to get a perfect reconstruction.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328826</parent>
</comment>
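The point in the comment above, that every Fourier-domain measurement carries information about all the pixels, can be checked directly. A small plain-NumPy sketch (just the DFT matrix, not the actual MR reconstruction pipeline; the sizes are arbitrary):

```python
import numpy as np

# Build the n x n DFT matrix explicitly: F @ x equals np.fft.fft(x), so each
# acquired Fourier coefficient is a weighted sum over *all* image samples.
n = 16
F = np.fft.fft(np.eye(n))

x = np.arange(n, dtype=float)          # stand-in for one line of an image
assert np.allclose(F @ x, np.fft.fft(x))

# Every weight in a DFT row has modulus 1: no pixel is ever left out of a
# Fourier-domain measurement, which is what makes undersampling there forgiving.
print(np.allclose(np.abs(F[3]), 1.0))  # True
```

This is why dropping Fourier samples is unlike dropping image pixels: each retained measurement still constrains the whole image, and the sparsity prior picks out the consistent reconstruction.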
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31341360</id>
	<title>Re:Why not...</title>
	<author>complete loony</author>
	<datestamp>1267559700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>And of course this plugin would work best if it knew the raw file format and exact colour pixel layout of the CCD in your camera. You should then be able to use the different RGB sub-pixel values, and their positions to build a far more detailed image.</htmltext>
<tokentext>And of course this plugin would work best if it knew the raw file format and exact colour pixel layout of the CCD in your camera .
You should then be able to use the different RGB sub-pixel values , and their positions to build a far more detailed image .</tokentext>
<sentencetext>And of course this plugin would work best if it knew the raw file format and exact colour pixel layout of the CCD in your camera.
You should then be able to use the different RGB sub-pixel values, and their positions to build a far more detailed image.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329526</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31337328</id>
	<title>Re:Military applications</title>
	<author>icegreentea</author>
	<datestamp>1267530840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>When subs (at least US subs) surface to talk, they use highly directional satellite links. You pretty much have to position yourself between the sub and the satellite to pick up the transmission. They also like to use burst transmission for as much stuff as they can. Being short in time makes it pretty tough to pick out too.</htmltext>
<tokentext>When subs ( at least US subs ) surface to talk , they use highly directional satellite links .
You pretty much have to position yourself between the sub and the satellite to pick up the transmission .
They also like to use burst transmission for as much stuff as they can .
Being short in time makes it pretty tough to pick out too .</tokentext>
<sentencetext>When subs (at least US subs) surface to talk, they use highly directional satellite links.
You pretty much have to position yourself between the sub and the satellite to pick up the transmission.
They also like to use burst transmission for as much stuff as they can.
Being short in time makes it pretty tough to pick out too.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328842</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330068</id>
	<title>Re:CSI</title>
	<author>Anonymous</author>
	<datestamp>1267546500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I just wrote a post on why compressed sensing is not CSI technology (yet!)

<a href="http://nuit-blanche.blogspot.com/2010/03/why-compressed-sensing-is-not-csi.html" title="blogspot.com" rel="nofollow">http://nuit-blanche.blogspot.com/2010/03/why-compressed-sensing-is-not-csi.html</a> [blogspot.com]</htmltext>
<tokentext>I just wrote a post on why compressed sensing is not CSI technology ( yet ! )
http://nuit-blanche.blogspot.com/2010/03/why-compressed-sensing-is-not-csi.html [ blogspot.com ]</tokentext>
<sentencetext>I just wrote a post on why compressed sensing is not CSI technology (yet!)
http://nuit-blanche.blogspot.com/2010/03/why-compressed-sensing-is-not-csi.html [blogspot.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328740</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31352896</id>
	<title>Re:I am a bit worried about the "fill in the shape</title>
	<author>daver00</author>
	<datestamp>1267623660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The algorithm does not work at all in the way that the Wired article describes. In CS you make the assumption that your unknown data set is sparse; it is now known that a random sample of a sparse data set contains all the information about the sparsity of that data set. From here you seek the most sparse data set which agrees with your sample, and it will be the exact solution provided all your assumptions are true and your sampled data is perfect. If your assumptions are nearly true and your sampled data is nearly perfect, then you will recreate very nearly the exact data set.</p><p>If you want to know, the 'algorithm' used in CS is probably some variant of the simplex algorithm, or some interior point method for solving convex optimisation problems.</p></htmltext>
<tokentext>The algorithm does not work at all in the way that the Wired article describes .
In CS you make the assumption that your unknown data set is sparse ; it is now known that a random sample of a sparse data set contains all the information about the sparsity of that data set .
From here you seek the most sparse data set which agrees with your sample , and it will be the exact solution provided all your assumptions are true and your sampled data is perfect .
If your assumptions are nearly true and your sampled data is nearly perfect , then you will recreate very nearly the exact data set . If you want to know , the ' algorithm ' used in CS is probably some variant of the simplex algorithm , or some interior point method for solving convex optimisation problems .</tokentext>
<sentencetext>The algorithm does not work at all in the way that the Wired article describes.
In CS you make the assumption that your unknown data set is sparse; it is now known that a random sample of a sparse data set contains all the information about the sparsity of that data set.
From here you seek the most sparse data set which agrees with your sample, and it will be the exact solution provided all your assumptions are true and your sampled data is perfect.
If your assumptions are nearly true and your sampled data is nearly perfect, then you will recreate very nearly the exact data set. If you want to know, the 'algorithm' used in CS is probably some variant of the simplex algorithm, or some interior point method for solving convex optimisation problems.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328826</parent>
</comment>
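The recovery step described above can be sketched as a toy example. This sketch uses orthogonal matching pursuit, a greedy stand-in for the simplex/interior-point solvers the comment mentions; the signal sizes, amplitudes, and Gaussian sensing matrix are illustrative assumptions, not anyone's published setup:

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal matching pursuit: greedily build a k-sparse x with A @ x ~= b."""
    support = []
    residual = b.copy()
    for _ in range(k):
        # pick the column most correlated with what is still unexplained
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # least-squares refit on the chosen columns, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 80, 40, 4                            # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = [1.5, -2.0, 1.0, -1.25]
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
b = A @ x_true                                 # 40 noise-free measurements of an 80-sample signal

x_hat = omp(A, b, k)
print(np.max(np.abs(x_hat - x_true)))          # near zero when the sparse signal is exactly recovered
```

As the comment says, the noise-free samples pin down the sparse signal: half the measurements suffice here because the solver only has to find the few nonzero entries, and with nearly-perfect data the same procedure returns a nearly-exact reconstruction.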
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329642</id>
	<title>Magic (BS)</title>
	<author>Ractive</author>
	<datestamp>1267544340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I've been working with digital images for a long time and I can tell you this: this is too good to be true. <br>
You can't get professional results even when trying to interpolate 5% extra data. I guess this is not oriented to professional-quality images; it will just make crappy images good enough to recognize the points of interest, and it will be acceptable to that point. But then there's the Obama sample. I have seen the printed image (in the dead tree version of the mag) and it certainly looks faked. There's some detail that couldn't have been retrieved, not with the current algorithms. Actually, as some have pointed out, the lapel pin data is not present at all, so how could you recreate that? Sounds to me like something more from the realm of magic than math. Hence: fake!</htmltext>
<tokenext>I 've been working with digital images for a long time and I can tell you this : this is too good to be true You ca n't get professional results even when trying to interpolate 5 % extra data , and even though I guess this is not oriented to professional quality images , it will just make crappy images good enough to recognize the points of interest , it will be acceptable to that point but then there 's the Obama sample , I have seen the printed image ( in the dead tree version of the mag ) and it certainly looks faked , there 's some detail that could n't have been retrieved , not with the current algorithms , actually as some have pointed out , the lapel pin data is not present at all so how could you recreate that , sounds to me like something more from the realm of magic than math , hence fake !</tokentext>
<sentencetext>I've been working with digital images for a long time and I can tell you this:  this is too good to be true 
You can't get professional results even when trying to interpolate 5% extra data, and even though I guess this is not oriented to professional quality images, it will just make crappy images good enough to recognize the points of interest, it will be acceptable to that point but then there's the Obama sample, I have seen the printed image (in the dead tree version of the mag) and it certainly looks faked, there's some detail that couldn't have been retrieved, not with the current algorithms, actually as some have pointed out, the lapel pin data is not present at all so how could you recreate that, sounds to me like something more from the realm of magic than math, hence fake!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329582</id>
	<title>yes, but...</title>
	<author>Anonymous</author>
	<datestamp>1267544100000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"If it sees four adjacent green pixels, it may add a green rectangle there."</p><p>Which is brilliant...unless the tumor you were looking for is a white dot in the middle of those 4 pixels.  Now it's all just a smooth green field.</p></htmltext>
<tokenext>" If it sees four adjacent green pixels , it may add a green rectangle there .
" Which is brilliant...unless the tumor you were looking for is a white dot in the middle of those 4 pixels .
Now it 's all just a smooth green field .</tokentext>
<sentencetext>"If it sees four adjacent green pixels, it may add a green rectangle there.
"Which is brilliant...unless the tumor you were looking for is a white dot in the middle of those 4 pixels.
Now it's all just a smooth green field.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31342798</id>
	<title>Re:Why not...</title>
	<author>miggyb</author>
	<datestamp>1267614960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>...imagine a 1000x1000 pixel image with 24 bit color. There are 24 ^ 1000000 unique pixel configurations to fill that image....</p></div><p>My brain had a buffer overflow. Can I imagine a smaller image, say 10x10 pixels, 256 colors?</p>
	</htmltext>
<tokenext>...imagine a 1000x1000 pixel image with 24 bit color .
There are 24 ^ 1000000 unique pixel configurations to fill that image .... My brain had a buffer overflow .
Can I imagine a smaller image , say 10x10 pixels , 256 colors ?</tokentext>
<sentencetext>...imagine a 1000x1000 pixel image with 24 bit color.
There are 24 ^ 1000000 unique pixel configurations to fill that image.... My brain had a buffer overflow.
Can I imagine a smaller image, say 10x10 pixels, 256 colors?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329628</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31333188</id>
	<title>From a Lo-Res Capture...</title>
	<author>Anonymous</author>
	<datestamp>1267558740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>.. to a Hi-Res Fiction. If there's no data, there's no data.</p><p>What if that black pixel over Obama's right shoulder that got "enhanced" to be a white pixel was actually an assassin with a high powered rifle standing 1000' in the background?</p><p>What if that dot on the map that gets turned from white to brown was actually a missile silo and not dirt?</p><p>I don't see much use for this other than in special cases, and maybe games, where you need stuff to look real, but not be real.</p></htmltext>
<tokenext>.. to a Hi-Res Fiction .
If there 's no data , there 's no data . What if that black pixel over Obama 's right shoulder that got " enhanced " to be a white pixel was actually an assassin with a high powered rifle standing 1000 ' in the background ? What if that dot on the map that gets turned from white to brown was actually a missile silo and not dirt ? I do n't see much use for this other than in special cases , and maybe games , where you need stuff to look real , but not be real .</tokentext>
<sentencetext>.. to a Hi-Res Fiction.
If there's no data, there's no data. What if that black pixel over Obama's right shoulder that got "enhanced" to be a white pixel was actually an assassin with a high powered rifle standing 1000' in the background? What if that dot on the map that gets turned from white to brown was actually a missile silo and not dirt? I don't see much use for this other than in special cases, and maybe games, where you need stuff to look real, but not be real.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329238</id>
	<title>Wrong.</title>
	<author>Hurricane78</author>
	<datestamp>1267542420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>These are fancy words for what is nothing else than automated educated guessing. (And re-vectorization.)</p><p>Yes, you can guess that a round shape is round, even when a couple of pixels are missing. But you can not guess that one of these missing pixels actually was a dent. So this mechanism here would still make that dent vanish. Just in a less-obvious way. (Which can be very bad, if that dent was critical.)</p><p>Essentially, if you have a lossy process, you are always going to have a lack of details, and that&rsquo;s not going to change.<br>Just that this process does to images when compared to e.g. JPEG, what MP3 does to music when compared to analog recordings.</p><p>In analog recordings, loss is audible noise. In MP3 it&rsquo;s the opposite. Usually mostly not audible, but still missing.<br>In JPEG, loss is visible artifacts. In this method it&rsquo;s the opposite. Usually mostly not visible, but still missing.</p></htmltext>
<tokenext>These are fancy words , for what is nothing else than automated educated guessing .
( And re-vectorization .
) Yes , you can guess that a round shape is round , even when a couple of pixels are missing .
But you can not guess that one of these missing pixels actually was a dent .
So this mechanism here would still make that dent vanish .
Just in a less-obvious way .
( Which can be very bad , if that dent was critical .
) Essentially if you have a lossy process , you are always going to have a lack of details , and that 's not going to change . Just that this process does to images when compared to e.g. JPEG , what MP3 does to music when compared to analog recordings . In analog recordings , loss is audible noise .
In MP3 it 's the opposite .
Usually mostly not audible , but still missing . In JPEG , loss is visible artifacts .
In this method it 's the opposite .
Usually mostly not visible , but still missing .</tokentext>
<sentencetext>These are fancy words, for what is nothing else than automated educated guessing.
(And re-vectorization.
)Yes, you can guess that a round shape is round, even when a couple of pixels are missing.
But you can not guess that one of these missing pixels actually was a dent.
So this mechanism here would still make that dent vanish.
Just in a less-obvious way.
(Which can be very bad, if that dent was critical.
)Essentially if you have a lossy process, you are always going to have a lack of details, and that’s not going to change. Just that this process does to images when compared to e.g. JPEG, what MP3 does to music when compared to analog recordings. In analog recordings, loss is audible noise.
In MP3 it’s the opposite.
Usually mostly not audible, but still missing. In JPEG, loss is visible artifacts.
In this method it’s the opposite.
Usually mostly not visible, but still missing.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329020</id>
	<title>What if you feed it noise?</title>
	<author>Anonymous</author>
	<datestamp>1267541160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>So, if I feed this algorithm an image that actually IS noise, what do I get?</p><p>Pictures of angels?</p></htmltext>
<tokenext>So , if I feed this algorithm an image that actually IS noise , what do I get ? Pictures of angels ?</tokentext>
<sentencetext>So, if I feed this algorithm an image that actually IS noise, what do I get?Pictures of angels?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329596</id>
	<title>This is an important tool!</title>
	<author>natehoy</author>
	<datestamp>1267544220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I can finally stop reading the articles and the summaries, and apply this algorithm to the first post to understand the article instead.  What a time saver!</p></htmltext>
<tokenext>I can finally stop reading the articles and the summaries , and apply this algorithm to the first post to understand the article instead .
What a time saver !</tokentext>
<sentencetext>I can finally stop reading the articles and the summaries, and apply this algorithm to the first post to understand the article instead.
What a time saver!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31333232</id>
	<title>Re:What if you feed it noise?</title>
	<author>NeoSkandranon</author>
	<datestamp>1267558920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>the face of Yahweh. Careful...</p></htmltext>
<tokenext>the face of Yahweh .
Careful ...</tokentext>
<sentencetext>the face of Yahweh.
Careful...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329020</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31334634</id>
	<title>Re:Holy Bad Acronym Batman</title>
	<author>ergean</author>
	<datestamp>1267520940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>CS  -  The inventor of Counter-Strike!!!</p></htmltext>
<tokenext>CS - The inventor of Counter-Strike ! !
!</tokentext>
<sentencetext>CS  -  The inventor of Counter-Strike!!
!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329140</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330632</id>
	<title>Re:Demo image</title>
	<author>l00sr</author>
	<datestamp>1267548840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>For real images created using compressed sensing, check out Rice's <a href="http://dsp.rice.edu/cscamera" title="rice.edu">one-pixel camera</a> [rice.edu].</p></htmltext>
<tokenext>For real images created using compressed sensing , check out Rice 's one-pixel camera [ rice.edu ] .</tokentext>
<sentencetext>For real images created using compressed sensing, check out Rice's one-pixel camera [rice.edu].</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328864</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31335726</id>
	<title>Re:CSI</title>
	<author>Annymouse Cowherd</author>
	<datestamp>1267524720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Has anyone actually written a practical implementation of this in something other than MatLab?</p></htmltext>
<tokenext>Has anyone actually written a practical implementation of this in something other than MatLab ?</tokentext>
<sentencetext>Has anyone actually written a practical implementation of this in something other than MatLab?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328962</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330450</id>
	<title>Re:Wrong.</title>
	<author>Anonymous</author>
	<datestamp>1267548180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I don't think this comment is justified.</p><p>The application noted in the Wired article is that these guys didn't have time to do a full MRI of a young boy when they wanted a high-res image of a very, very small part of his body.</p><p>By doing a faster MRI and using this algorithm they were able to resolve this tiny detail.</p><p>So, no, it doesn't seem that the dents do vanish.</p><p>If you read some of the background papers on www.l1-magic.org you can see that the technique works if the original image is compressible using some transform method.  A circle with a dent in it is naturally highly compressible as it contains only two features.</p><p>However, note that we're not talking about missing pixels.  We're talking about calculating various integrals over the entire image.  No pixel is missing; there are just a limited number of integrals.</p><p>So the entire sampling paradigm is totally different to what we're used to.  The consequences therefore cannot easily be derived from your classical regular-sampling Nyquist theorem stuff.</p></htmltext>
<tokenext>I do n't think this comment is justified . The application noted in the Wired article is that these guys did n't have time to do a full MRI of a young boy when they wanted a high-res image of a very very small part of his body . By doing a faster MRI and using this algorithm they were able to resolve this tiny detail . So , no , it does n't seem that the dents do vanish . If you read some of the background papers on www.l1-magic.org you can see that the technique works if the original image is compressible using some transform method .
A circle with a dent in it is naturally highly compressible as it contains only two features . However , note that we 're not talking about missing pixels .
We 're talking about calculating various integrals over the entire image .
No pixel is missing ; there are just a limited number of integrals . So the entire sampling paradigm is totally different to what we 're used to .
The consequences therefore can not easily be derived from your classical regular-sampling Nyquist theorem stuff .</tokentext>
<sentencetext>I don't think this comment is justified. The application noted in the Wired article is that these guys didn't have time to do a full MRI of a young boy when they wanted a high-res image of a very very small part of his body. By doing a faster MRI and using this algorithm they were able to resolve this tiny detail. So, no, it doesn't seem that the dents do vanish. If you read some of the background papers on www.l1-magic.org you can see that the technique works if the original image is compressible using some transform method.
A circle with a dent in it is naturally highly compressible as it contains only two features. However, note that we're not talking about missing pixels.
We're talking about calculating various integrals over the entire image.
No pixel is missing; there are just a limited number of integrals. So the entire sampling paradigm is totally different to what we're used to.
The consequences therefore cannot easily be derived from your classical regular-sampling Nyquist theorem stuff.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329238</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328826</id>
	<title>I am a bit worried about the "fill in the shapes"</title>
	<author>Anonymous</author>
	<datestamp>1267539900000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>From TFA <p><div class="quote"><p>The algorithm then begins to modify the picture in stages by laying colored shapes over the randomly selected image. The goal is to seek what&rsquo;s called sparsity, a measure of image simplicity.</p></div><p>The thing is in a medical image couldn't that actually remove a small growth or lesion? I know the article says:</p><p><div class="quote"><p>That image isn&rsquo;t absolutely guaranteed to be the sparsest one or the exact image you were trying to reconstruct, but Cand&#232;s and Tao have shown mathematically that the chance of its being wrong is infinitesimally small.</p></div><p> but how often do analysis like this make assumptions about the data, like you are unlikely to get a small disruption in a regular shape and if you do it is not significant.
</p><p>
on the bright side, when Moore's law allows real-time processing we can look forward to night vision cameras which really are "as good as daylight", and for this sort of application the odd distortion really won't matter so much.</p></div>
	</htmltext>
<tokenext>From TFA The algorithm then begins to modify the picture in stages by laying colored shapes over the randomly selected image .
The goal is to seek what 's called sparsity , a measure of image simplicity . The thing is in a medical image could n't that actually remove a small growth or lesion ?
I know the article says : That image is n't absolutely guaranteed to be the sparsest one or the exact image you were trying to reconstruct , but Candès and Tao have shown mathematically that the chance of its being wrong is infinitesimally small .
but how often do analyses like this make assumptions about the data , like you are unlikely to get a small disruption in a regular shape and if you do it is not significant ?
On the bright side , when Moore 's law allows real-time processing we can look forward to night vision cameras which really are " as good as daylight " , and for this sort of application the odd distortion really wo n't matter so much .</tokentext>
<sentencetext>From TFA The algorithm then begins to modify the picture in stages by laying colored shapes over the randomly selected image.
The goal is to seek what’s called sparsity, a measure of image simplicity. The thing is, in a medical image couldn't that actually remove a small growth or lesion?
I know the article says: That image isn’t absolutely guaranteed to be the sparsest one or the exact image you were trying to reconstruct, but Candès and Tao have shown mathematically that the chance of its being wrong is infinitesimally small.
but how often do analyses like this make assumptions about the data, like you are unlikely to get a small disruption in a regular shape, and if you do, it is not significant?
On the bright side, when Moore's law allows real-time processing we can look forward to night vision cameras which really are "as good as daylight", and for this sort of application the odd distortion really won't matter so much.
	</sentencetext>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329430
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329060
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329692
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328826
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329090
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328826
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31331624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329884
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328824
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329510
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328826
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330698
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328842
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329402
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328962
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31341360
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329526
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328888
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328778
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328856
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328778
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330530
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329576
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328944
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329158
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328944
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330682
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328992
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328888
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328778
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329902
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328888
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328778
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329098
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328826
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329802
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328778
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31333232
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329020
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31352896
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328826
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31339676
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329020
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31333686
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329526
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328888
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328778
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31339868
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329442
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329140
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31335726
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328962
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330514
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328778
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329898
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328826
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330068
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329188
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329548
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328864
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31331678
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329238
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329252
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31334634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329140
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329792
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329206
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328826
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330450
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329238
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31333310
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328842
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31342798
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329628
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328778
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330632
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328864
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31337328
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328842
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329462
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328778
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329482
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_02_0242224_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330608
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329346
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329140
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_02_0242224.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328778
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330514
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329462
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329628
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31342798
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329802
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328856
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328888
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329526
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31341360
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31333686
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328992
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330682
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329902
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_02_0242224.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329642
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_02_0242224.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328944
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329576
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329158
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_02_0242224.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329856
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_02_0242224.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328830
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_02_0242224.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329830
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_02_0242224.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329202
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_02_0242224.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328854
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_02_0242224.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329140
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31334634
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329346
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330608
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329442
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_02_0242224.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328740
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31339868
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31331624
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330068
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329482
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330530
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328962
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329402
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31335726
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329188
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329252
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_02_0242224.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329238
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330450
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31331678
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_02_0242224.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328792
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_02_0242224.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329060
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329430
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_02_0242224.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328842
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31337328
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330698
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31333310
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_02_0242224.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328824
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329884
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_02_0242224.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329020
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31339676
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31333232
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_02_0242224.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328826
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329206
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329792
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329090
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329692
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31352896
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329898
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329098
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329510
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_02_0242224.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31328864
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31329548
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_02_0242224.31330632
</commentlist>
</conversation>
