<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_07_16_2154238</id>
	<title>Choosing Better-Quality JPEG Images With Software?</title>
	<author>timothy</author>
	<datestamp>1247738520000</datestamp>
	<htmltext><a href="mailto:Ken.Poole@shaPOLLOCKw.caminuspainter" rel="nofollow">kpoole55</a> writes <i>"I've been googling for an answer to a question and I'm not making much progress. The problem is image collections, and finding the better of near-duplicate images.  There are many programs, free and costly, CLI or GUI oriented, for finding visually similar images &mdash; but I'm looking for a next step in the process.  It's known that saving the same source image in JPEG format at different quality levels produces different images, the one at the lower quality having more JPEG artifacts. I've been trying to find a method to compare two visually similar JPEG images and select the one with the fewest JPEG artifacts (or the one with the most JPEG artifacts, either will serve.) I also suspect that this is going to be one of those 'Well, of course, how else would you do it?  It's so simple.' moments."</i></htmltext>
<tokentext>kpoole55 writes " I 've been googling for an answer to a question and I 'm not making much progress .
The problem is image collections , and finding the better of near-duplicate images .
There are many programs , free and costly , CLI or GUI oriented , for finding visually similar images    but I 'm looking for a next step in the process .
It 's known that saving the same source image in JPEG format at different quality levels produces different images , the one at the lower quality having more JPEG artifacts .
I 've been trying to find a method to compare two visually similar JPEG images and select the one with the fewest JPEG artifacts ( or the one with the most JPEG artifacts , either will serve .
) I also suspect that this is going to be one of those 'Well , of course , how else would you do it ?
It 's so simple .
' moments .
"</tokentext>
<sentencetext>kpoole55 writes "I've been googling for an answer to a question and I'm not making much progress.
The problem is image collections, and finding the better of near-duplicate images.
There are many programs, free and costly, CLI or GUI oriented, for finding visually similar images — but I'm looking for a next step in the process.
It's known that saving the same source image in JPEG format at different quality levels produces different images, the one at the lower quality having more JPEG artifacts.
I've been trying to find a method to compare two visually similar JPEG images and select the one with the fewest JPEG artifacts (or the one with the most JPEG artifacts, either will serve.
) I also suspect that this is going to be one of those 'Well, of course, how else would you do it?
It's so simple.
' moments.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724141</id>
	<title>Re:File size</title>
	<author>kernelphr34k</author>
	<datestamp>1247745240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext>JPEG is a lossy format anyway... JPEG sucks if you want to keep quality. Every time you open and save the image it loses quality.

Use a PNG or TIFF for a better-quality image that you can open and save many times without losing quality.</htmltext>
<tokentext>JPEG is a lossy format anyway ... JPEG sucks if you want to keep quality . Every time you open and save the image it loses quality .
Use a PNG or TIFF for a better-quality image that you can open and save many times without losing quality .</tokentext>
<sentencetext>JPEG is a lossy format anyway... JPEG sucks if you want to keep quality. Every time you open and save the image it loses quality.
Use a PNG or TIFF for a better-quality image that you can open and save many times without losing quality.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724087</id>
	<title>Look for boundaries</title>
	<author>Anonymous</author>
	<datestamp>1247745000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>JPEG compression averages groups of pixels with similar color data inside the JPEG image, but does not weigh that average against nearby pixel groups. You can use this fact to identify JPEG artifacts, even if the edges between artifacts are not visible to human eyes.</p><p>E.g., in a patch of sky, which has a fairly random but otherwise uniform distribution of shades of blue, there will emerge "squares" where the averaging algorithm has averaged a pixel group but did not weigh the average of adjacent groups, resulting in a visually identifiable artifact.</p><p>You can gauge the quality of a compressed JPEG image by testing for discrete boundaries in areas of similar color values that would nominally contain a random (or smooth-gradient-with-random-dither) aggregation of similar color types, and assigning a "severity" value based on the 'hardness' of the artifact's difference from its neighbors.</p><p>In other words, in areas that would originally have had a nice "smooth" blending of similar colors, you will end up with blocks of discrete colors that have discernible edges. The severity of artifacting would be determinable by measuring how discretely unique each artifact block is from its neighbors (with caveats for natural boundaries, such as sky against tree, etc.).</p><p>To evaluate whether an edge is a JPEG artifact or not, you should gather the JPEG pixel-group size from the JPEG header, then see if your edges form a rectangle that is a multiple of that size.</p><p>This way you can tell if the hard edge is an artifact, or if it is the edge of Paris Hilton's nipple (or some other natural edge; natural edges will very rarely have a mathematically perfect rectangular profile).</p><p>A systematic evaluation of an image would be slow and painful, but would produce a scoring benchmark to rate two arbitrary JPEGs against each other.
(Better, of course, would be two JPEGs and a lossless PNG; that way you have the un-averaged data to help identify artifact boundaries, among other things, but that isn't what you asked for.)</p></htmltext>
<tokentext>JPEG compression averages groups of pixels with similar color data inside the JPEG image , but does not weigh that average against nearby pixel groups .
You can use this fact to identify JPEG artifacts , even if the edges between artifacts is not visible to human eyes.EG , in a patch of sky , which has a fairly random , but otherwise uniform distribution of shades of blue , there will emerge " squares " where the averaging algorithm has averaged a pixel group , but did not weigh the average of adjacent groups , resulting in a visually identifiable artifact.You can gauge the quality of a compressed JPEG image by testing for discrete boundries in areas of similar color values that would nominally contain a random ( or smooth gradient with random dither ) aggregation of similar color types , and assinging a " Severity " value based on the 'hardness ' of the artifact 's differnce to it 's neighbors.In other words , in areas that would have originally had a nice " smooth " blending of similar colors , you will end up with blocks of discrete colors that have discernable edges .
The severity of artifacting would be determinable by measuring how far discretely unique each artifact block is from it 's neighbors , ( with caveats to natural boundries- such as sky against tree , etc .
) To evaluate if an edge is a JPEG artifact or not , you should gather the JPEG pixel group size from the JPEG header , then see if your edges form a rectangle that is a multiple of that size.This way you can tell if the hard edge is an artifact , or if it is the edge of Paris Hilton 's nipple ( or some other natural edge .
Natural edges will very rarely have a mathematically perfect rectangular profile .
) A systematic evaluation of an image would be slow and painful , but would produce a scoring benchmark to rate two arbitrary JPEGs against each other .
( Better would , of course , be 2 JPEGS and a lossless PNG-- that way you have the un-averaged data to help identify artifact boundries with , among other things , but that isnt what you asked for .
)</tokentext>
<sentencetext>JPEG compression averages groups of pixels with similar color data inside the JPEG image, but does not weigh that average against nearby pixel groups.
You can use this fact to identify JPEG artifacts, even if the edges between artifacts are not visible to human eyes.
E.g., in a patch of sky, which has a fairly random but otherwise uniform distribution of shades of blue, there will emerge "squares" where the averaging algorithm has averaged a pixel group but did not weigh the average of adjacent groups, resulting in a visually identifiable artifact.
You can gauge the quality of a compressed JPEG image by testing for discrete boundaries in areas of similar color values that would nominally contain a random (or smooth-gradient-with-random-dither) aggregation of similar color types, and assigning a "severity" value based on the 'hardness' of the artifact's difference from its neighbors.
In other words, in areas that would originally have had a nice "smooth" blending of similar colors, you will end up with blocks of discrete colors that have discernible edges.
The severity of artifacting would be determinable by measuring how discretely unique each artifact block is from its neighbors (with caveats for natural boundaries, such as sky against tree, etc.).
To evaluate whether an edge is a JPEG artifact or not, you should gather the JPEG pixel-group size from the JPEG header, then see if your edges form a rectangle that is a multiple of that size.
This way you can tell if the hard edge is an artifact, or if it is the edge of Paris Hilton's nipple (or some other natural edge; natural edges will very rarely have a mathematically perfect rectangular profile).
A systematic evaluation of an image would be slow and painful, but would produce a scoring benchmark to rate two arbitrary JPEGs against each other.
(Better, of course, would be two JPEGs and a lossless PNG; that way you have the un-averaged data to help identify artifact boundaries, among other things, but that isn't what you asked for.)</sentencetext>
</comment>
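The grid-alignment test this comment describes can be sketched in a few lines: edges that line up with the 8x8 JPEG block grid are suspect, edges elsewhere are probably real. A minimal sketch, assuming a single-channel NumPy array and the standard 8-pixel block size (the function name is mine, not from the comment):

```python
import numpy as np

def blockiness(img, block=8):
    """Ratio of the mean horizontal gradient ON 8x8 block seams to the
    mean gradient elsewhere. Ratios well above 1.0 mean the hard edges
    line up with the JPEG block grid, i.e. blocking artifacts."""
    img = img.astype(np.float64)
    dx = np.abs(np.diff(img, axis=1))        # gradient between adjacent columns
    cols = np.arange(dx.shape[1])
    seam = (cols % block) == (block - 1)     # differences that straddle a block boundary
    return dx[:, seam].mean() / (dx[:, ~seam].mean() + 1e-9)
```

Comparing the two candidates' scores (higher means blockier, so lower quality) is then a one-liner; a real implementation would also check the vertical direction and mask out natural edges, as the comment cautions.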
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724165</id>
	<title>Automatic JPEG Artifact Removal</title>
	<author>yet-another-lobbyist</author>
	<datestamp>1247745300000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext>For what it's worth: I remember using Paint Shop Pro 9 a few years ago. It has a function called "Removal of JPEG artifacts" (or similar). I remember being surprised how well it worked. I also remember that PSP has quite good functionality for batch processing. So what you could do is use the "remove artifact" function and look at the difference before/after this function. The image with the bigger difference has to be the one of lower quality. <br>
I am not sure if there is a tool that automatically calculates the difference between two images, but this is a task simple enough to be coded in a few lines (given the right libraries are at hand). For each color channel (RGB) of each pixel, you basically just calculate the square of the difference between the two images. Then you add all these numbers up (all pixels, all color channels). The bigger this number is, the bigger the difference between the images. <br>
Maybe not your push-one-button solution, but should be doable. Just my $0.02.</htmltext>
<tokentext>For what it 's worth : I remember using Paint Shop Pro 9 a few years ago .
It has a function called " Removal of JPEG artifacts " ( or similar ) .
I remember being surprised how well it worked .
I also remember that PSP has quite good functionality for batch processing .
So what you could do is use the " remove artifact " function and look at the difference before/after this function .
The image with the bigger difference has to be the one of lower quality .
I am not sure if there is a tool that automatically calculates the difference between two images , but this is a task simple enough to be coded in a few lines ( given the right libraries are at hand ) .
For each color channel ( RGB ) of each pixel , you basically just calculate the square of the difference between the two images .
Then you add all these numbers up ( all pixels , all color channels ) .
The bigger this number is , the bigger the difference between the images .
Maybe not your push-one-button solution , but should be doable .
Just my $ 0.02 .</tokentext>
<sentencetext>For what it's worth: I remember using Paint Shop Pro 9 a few years ago.
It has a function called "Removal of JPEG artifacts" (or similar).
I remember being surprised how well it worked.
I also remember that PSP has quite good functionality for batch processing.
So what you could do is use the "remove artifact" function and look at the difference before/after this function.
The image with the bigger difference has to be the one of lower quality.
I am not sure if there is a tool that automatically calculates the difference between two images, but this is a task simple enough to be coded in a few lines (given the right libraries are at hand).
For each color channel (RGB) of each pixel, you basically just calculate the square of the difference between the two images.
Then you add all these numbers up (all pixels, all color channels).
The bigger this number is, the bigger the difference between the images.
Maybe not your push-one-button solution, but should be doable.
Just my $0.02.</sentencetext>
</comment>
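The per-pixel difference metric this comment describes really is only a few lines with NumPy (a sketch; the function name is mine):

```python
import numpy as np

def ssd(a, b):
    """Sum, over all pixels and color channels, of the squared
    difference between two same-sized images -- the bigger the
    number, the more the images differ."""
    return float(np.sum((a.astype(np.float64) - b.astype(np.float64)) ** 2))
```

Running each candidate through the artifact-removal filter and computing `ssd(before, after)` then flags the lower-quality copy: it is the one the filter changed more.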
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725017</id>
	<title>Some things aren't doable yet</title>
	<author>PingXao</author>
	<datestamp>1247751840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>Aside from the mathematical tests some have suggested, my gut tells me this is going to be almost impossible.  There are tasks that a human can perform that just aren't doable given the present state of our software systems.  The gap has as much to do with our understanding of how we perceive through our senses as it does with algorithms and calculation methodologies.  We just don't yet know enough about the underlying processes to make a computer do it.</p><p>The same goes for other areas where AI is sorely lacking.  Things like OCR, language recognition and translation, not to mention a program where you can whistle a tune and have it analyzed to the point where its name can be deduced (if it was written already), or scored as sheet music (if you're creating something new).</p></htmltext>
<tokentext>Aside from the mathematical tests some have suggested , my gut tells me this is going to be almost impossible .
There are tasks that a human can perform that just are n't doable given the present state of our software systems .
The gap has as much to do with our understanding about how we perceive through our senses as it does with algorithms and calculation methodologies .
We just do n't know yet enough about the underlying processes to make a computer do it.The same goes for other areas where AI is sorely lacking .
Things like OCR , language recognition and translation , not to mention a program where you can whistle a tune and have it analyzed to the point where its name can be deduced ( if it was written already ) , or scored as sheet music ( if you 're creating something new ) .</tokentext>
<sentencetext>Aside from the mathematical tests some have suggested, my gut tells me this is going to be almost impossible.
There are tasks that a human can perform that just aren't doable given the present state of our software systems.
The gap has as much to do with our understanding about how we perceive through our senses as it does with algorithms and calculation methodologies.
We just don't yet know enough about the underlying processes to make a computer do it.
The same goes for other areas where AI is sorely lacking.
Things like OCR, language recognition and translation, not to mention a program where you can whistle a tune and have it analyzed to the point where its name can be deduced (if it was written already), or scored as sheet music (if you're creating something new).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724501</id>
	<title>NO not file size</title>
	<author>frovingslosh</author>
	<datestamp>1247747580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>NO. Not file size. File size would be a potential test if all images were from the same original source and if they were only ever JPEG-compressed once.  Unfortunately, quite often one will come across images that have been JPEG-compressed and re-compressed, and the final re-compression was done at "high quality", so the file is large for the image, but it still contains all of the JPEG artifacts from the lower-quality compression. You can also see extra artifacts when one file has only been compressed once but another file has been compressed repeatedly, even if the second file is the same size as the file that was only compressed once.</p><p>

There are, of course, other issues that come into question too, such as original color depth and color depth of every intermediate image. </p><p>

The poster asked a good question, but you did not provide a helpful answer.</p></htmltext>
<tokentext>NO .
Not file size .
File size would be a potential test if all images were from the same original source and if they were only ever jpeg compressed once .
Unfortunately , quite often one will come across images that have been jpeg compressed and re-compressed , and the final re-compression was done at " high quality ' , So the file is large for the image , but it still contains all of the jpeg artifacts from the lower quality compression .
You can also see extra artifacts when one file has only been compressed once but another file has been compressed repeatedly , even if the second file is the same size as the file that was only compressed once .
There are , of course , other issues that come into question too , such as original color depth and color depth of every intermediate image .
The poster asked a good question , but you did not provide a helpful answer .</tokentext>
<sentencetext>NO.
Not file size.
File size would be a potential test if all images were from the same original source and if they were only ever jpeg compressed once.
Unfortunately, quite often one will come across images that have been JPEG-compressed and re-compressed, and the final re-compression was done at "high quality", so the file is large for the image, but it still contains all of the JPEG artifacts from the lower-quality compression.
You can also see extra artifacts when one file has only been compressed once but another file has been compressed repeatedly, even if the second file is the same size as the file that was only compressed once.
There are, of course, other issues that come into question too, such as original color depth and color depth of every intermediate image.
The poster asked a good question, but you did not provide a helpful answer.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723555</id>
	<title>File size or density?</title>
	<author>Durandal64</author>
	<datestamp>1247742480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Have you tried just comparing the files' sizes with respect to the images' dimensions? It'll vary from encoder to encoder, but higher-quality JPEGs will be larger than lower-quality ones. You could just use the number of pixels in the picture and the file size to obtain a rough approximation of "quality per pixel" and choose the image with the higher value. It won't be perfect, but it's a lot easier than trying to pick out JPEG artifacts.
<br>
<br>
Also, the number of artifacts doesn't tell the full story. One image may have more artifacts, but those artifacts may all exist in the background parts of the image, while the foreground is less blocky. It's a choice each encoder makes.</htmltext>
<tokentext>Have you tried just comparing the files ' sizes with respect to the images ' dimensions ?
It 'll vary from encoder to encoder , but higher-quality JPEGs will be larger than lower-quality ones .
You could just use the number of pixels in the picture and the file size to obtain a rough approximation of " quality per pixel " and choose the image with the higher value .
It wo n't be perfect , but it 's a lot easier than trying to pick out JPEG artifacts .
Also , the number of artifacts does n't tell the full story .
One image may have more artifacts , but those artifacts may all exist in the background parts of the image , while the foreground is less blocky .
It 's a choice each encoder makes .</tokentext>
<sentencetext>Have you tried just comparing the files' sizes with respect to the images' dimensions?
It'll vary from encoder to encoder, but higher-quality JPEGs will be larger than lower-quality ones.
You could just use the number of pixels in the picture and the file size to obtain a rough approximation of "quality per pixel" and choose the image with the higher value.
It won't be perfect, but it's a lot easier than trying to pick out JPEG artifacts.
Also, the number of artifacts doesn't tell the full story.
One image may have more artifacts, but those artifacts may all exist in the background parts of the image, while the foreground is less blocky.
It's a choice each encoder makes.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723771</id>
	<title>Bits per pixel</title>
	<author>Citizen of Earth</author>
	<datestamp>1247743440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Compute the number of bits per pixel of the image data.</htmltext>
<tokentext>Compute the number of bits per pixel of the image data .</tokentext>
<sentencetext>Compute the number of bits per pixel of the image data.</sentencetext>
</comment>
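Both file-size suggestions above reduce to the same ratio: compressed bits spent per pixel. A sketch (the function name is mine; width and height would come from the image header):

```python
import os

def bits_per_pixel(path, width, height):
    """Compressed bits per pixel: a rough 'quality per pixel' proxy.
    Higher values usually mean a higher JPEG quality setting, subject
    to the recompression caveats raised elsewhere in the thread."""
    return os.path.getsize(path) * 8 / (width * height)
```

Given two near-duplicates of the same dimensions, keeping the one with the higher value is the one-line version of this heuristic.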
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725919</id>
	<title>Re:File size</title>
	<author>swillden</author>
	<datestamp>1247762640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>File size doesn't tell you anything.</p></div><p>I use it all the time, and it works really well.

</p><p>Sometimes when I'm trying to handhold a shot and I have to use a shutter speed that's a little too slow (meaning small shakes of my hands cause blur), I put the camera in continuous mode and mash the button for 2-3 seconds, collecting 10-15 images of almost exactly the same image -- but some of them will come out less shaky and significantly sharper than others.

</p><p>In post-processing, I could manually compare them one by one to find the sharpest, but it's much quicker and easier to look at the file sizes.  Having done this a few hundred times, I now no longer even bother examining the images visually at 1:1 zoom, because in the many that I did check carefully, file size was always an accurate indicator.  This is true with both JPEG files and CR2 (losslessly-compressed RAW files).</p>
	</htmltext>
<tokentext>File size does n't tell you anything . I use it all the time , and it works really well .
Sometimes when I 'm trying to handhold a shot and I have to use a shutter speed that 's a little too slow ( meaning small shakes of my hands cause blur ) , I put the camera in continuous mode and mash the button for 2-3 seconds , collecting 10-15 images of almost exactly the same image -- but some of them will come out less shaky and significantly sharper than others .
In post-processing , I could manually compare them one by one to find the sharpest , but it 's much quicker and easier to look at the file sizes .
Having done this a few hundred times , I now no longer even bother examining the images visually at 1 : 1 zoom , because in the many that I did check carefully , file size was always an accurate indicator .
This is true with both JPEG files and CR2 ( losslessly-compressed RAW files ) .</tokentext>
<sentencetext>File size doesn't tell you anything.I use it all the time, and it works really well.
Sometimes when I'm trying to handhold a shot and I have to use a shutter speed that's a little too slow (meaning small shakes of my hands cause blur), I put the camera in continuous mode and mash the button for 2-3 seconds, collecting 10-15 images of almost exactly the same image -- but some of them will come out less shaky and significantly sharper than others.
In post-processing, I could manually compare them one by one to find the sharpest, but it's much quicker and easier to look at the file sizes.
Having done this a few hundred times, I now no longer even bother examining the images visually at 1:1 zoom, because in the many that I did check carefully, file size was always an accurate indicator.
This is true with both JPEG files and CR2 (losslessly-compressed RAW files).
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723707</parent>
</comment>
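For the burst-shot workflow this comment describes, the whole selection step is one call (a sketch; `pick_sharpest` is my name for it, and it assumes all shots were saved with identical settings):

```python
import os

def pick_sharpest(paths):
    """Among near-identical burst shots saved with the same settings,
    the largest file usually kept the most detail through compression --
    the heuristic the comment reports works well in practice."""
    return max(paths, key=os.path.getsize)
```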
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724525</id>
	<title>variation</title>
	<author>superwiz</author>
	<datestamp>1247747760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext>Compute the variance of the Fourier coefficients within each block and then calculate the average for each image.  The better-quality image should have lower variance.  If a block has a lot of edges, then the higher-frequency coefficients should have much higher values than the lower ones.  If a block is uniform, then the lower-frequency coefficients should have higher values.  So if you have a good image, it will be easy to see the difference between uniform parts and edges.  That is, the coefficients of the most "important" frequencies within a block will be higher.  If you have a poor-quality image, they will not be.</htmltext>
<tokentext>Compute the variance of the Fourier coefficients within each block and then calculate the average for each image .
The better quality image should have lower variance .
If a block has a lot of edges , then the higher frequency coefficients should have much higher values than the lower ones .
If a block is uniform , then the lower frequency coefficients should have higher values .
So if you have a good image , it will be easy to see the difference between uniform parts and edges .
That is the coefficients of the most " important " frequency within a block will be higher .
If your have a poor quality image , then not .</tokentext>
<sentencetext>Compute the variance of the Fourier coefficients within each block and then calculate the average for each image.
The better quality image should have lower variance.
If a block has a lot of edges, then the higher frequency coefficients should have much higher values than the lower ones.
If a block is uniform, then the lower frequency coefficients should have higher values.
So if you have a good image, it will be easy to see the difference between uniform parts and edges.
That is, the coefficients of the most "important" frequencies within a block will be higher.
If you have a poor-quality image, they will not be.</sentencetext>
</comment>
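The per-block transform this comment appeals to is the 8x8 DCT that JPEG itself uses. A sketch of the proposed score follows, with the caveat that "lower variance means better quality" is the commenter's conjecture, not an established metric (both function names are mine):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, the transform JPEG applies per block."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def mean_block_dct_variance(img, block=8):
    """Variance of the DCT coefficients inside each 8x8 block,
    averaged over all full blocks of a grayscale image."""
    m = dct_matrix(block)
    h = img.shape[0] - img.shape[0] % block
    w = img.shape[1] - img.shape[1] % block
    img = img[:h, :w].astype(np.float64)
    variances = [
        (m @ img[y:y + block, x:x + block] @ m.T).var()
        for y in range(0, h, block)
        for x in range(0, w, block)
    ]
    return float(np.mean(variances))
```

Computing this for both candidates and comparing the averages is then the whole test as proposed.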
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723693</id>
	<title>Measure sharpness?</title>
	<author>Anonymous</author>
	<datestamp>1247743080000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
<htmltext><p>Compute the root-mean-square difference between the original image and a gaussian-blurred version? JPEG tends to soften details and reduce areas of sharp contrast, so the sharper result will probably be better quality.  This is similar to the PSNR metric for image quality.</p><p>Bonus: very fast, and can be done by convolution, which optimizes very efficiently.</p></htmltext>
<tokentext>Compute the root-mean-square difference between the original image and a gaussian-blurred version ? JPEG tends to soften details and reduce areas of sharp contrast , so the sharper result will probably be better quality .
This is similar to the PSNR metric for image quality . Bonus : very fast , and can be done by convolution , which optimizes very efficiently .</tokentext>
<sentencetext>Compute the root-mean-square difference between the original image and a gaussian-blurred version? JPEG tends to soften details and reduce areas of sharp contrast, so the sharper result will probably be better quality.
This is similar to the PSNR metric for image quality. Bonus: very fast, and can be done by convolution, which optimizes very efficiently.</sentencetext>
</comment>
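That metric is only three steps: blur, subtract, root-mean-square. A sketch with a small separable [1 2 1]/4 kernel standing in for the Gaussian (the function name is mine):

```python
import numpy as np

def sharpness(img):
    """RMS difference between a grayscale image and a lightly blurred
    copy. Soft, artifact-smeared images change little under the blur
    and score low; sharp, detailed images score high."""
    img = img.astype(np.float64)
    k = np.array([1.0, 2.0, 1.0]) / 4.0   # tiny separable blur kernel
    blur = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blur = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blur)
    return float(np.sqrt(np.mean((img - blur) ** 2)))
```

Keeping whichever near-duplicate scores higher is then the comparison step; as the comment notes, the separable convolution makes this cheap even on large images.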
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724681</id>
	<title>Adobe DNG</title>
	<author>Gruff1002</author>
	<datestamp>1247748840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>How to save digital photos is a serious concern. JPEG sucks; it is not even an option. Any 24-bit option is doable. Here's the rub: Adobe needs to get more open source; we can help them and they can help us.</p></htmltext>
<tokentext>How to save digital photos is a serious concern .
JPEG sucks , it is not even an option .
Any 24 bit option is doable .
Here 's the rub Adobe needs to get more open source , we can help them and they can help us .</tokentext>
<sentencetext>How to save digital photos is a serious concern.
JPEG sucks, it is not even an option.
Any 24 bit option is doable.
Here's the rub: Adobe needs to get more open source; we can help them and they can help us.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726263</id>
	<title>Re:File size</title>
	<author>Air-conditioned cowh</author>
	<datestamp>1247767320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p><div class="quote"><p>it is lossy compression, after all...</p></div><p>Time stamps, even!</p>
	</htmltext>
<tokentext>it is lossy compression , after all ...
Time stamps , even !</tokentext>
<sentencetext>it is lossy compression, after all .
. .Time stamps, even!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726129</id>
	<title>JND Baby!</title>
	<author>Anonymous</author>
	<datestamp>1247765460000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>Just Noticeable Difference.  The objective way of measuring the subjective. <a href="http://en.wikipedia.org/wiki/Difference_limen" title="wikipedia.org" rel="nofollow">http://en.wikipedia.org/wiki/Difference_limen</a> [wikipedia.org]</p></htmltext>
<tokentext>Just Noticeable Difference .
The objective way of measuring the subjective .
http://en.wikipedia.org/wiki/Difference_limen [ wikipedia.org ]</tokentext>
<sentencetext>Just Noticeable Difference.
The objective way of measuring the subjective.
http://en.wikipedia.org/wiki/Difference\_limen [wikipedia.org]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723563</id>
	<title>the solution is simple</title>
	<author>Anonymous</author>
	<datestamp>1247742540000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>-1</modscore>
	<htmltext><p>I would have sex with a mare.</p></htmltext>
<tokenext>I would have sex with a mare .</tokentext>
<sentencetext>I would have sex with a mare.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724979</id>
	<title>Try jpgQ - JPEG Quality Estimator</title>
	<author>Anonymous</author>
	<datestamp>1247751480000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>1</modscore>
	<htmltext><p>jpgQ - JPEG Quality Estimator<br>http://www.mediachance.com/digicam/jpgq.htm</p></htmltext>
<tokenext>jpgQ - JPEG Quality Estimatorhttp : //www.mediachance.com/digicam/jpgq.htm</tokentext>
<sentencetext>jpgQ - JPEG Quality Estimatorhttp://www.mediachance.com/digicam/jpgq.htm</sentencetext>
</comment>
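For readers who want to script the same idea: jpgQ infers quality from the JPEG quantization tables, and Pillow (assumed installed; the `quantization` attribute is Pillow-specific) exposes those tables directly. A rough sketch of the idea, not jpgQ's actual algorithm:

```python
from PIL import Image

def quant_coarseness(fp):
    """Mean quantization-table entry of a JPEG (path or file object).

    Larger entries mean coarser quantization, i.e. the file was
    saved at a lower quality -- roughly what jpgQ estimates."""
    img = Image.open(fp)
    tables = img.quantization  # dict: table id -> 64 coefficients
    entries = [v for table in tables.values() for v in table]
    return sum(entries) / len(entries)

# Of two near-duplicates, keep the one with the smaller score:
# best = min(paths, key=quant_coarseness)
```

Note this reads only the most recent save's tables, so it says nothing about damage from earlier recompression generations.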
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724045</id>
	<title>Filters</title>
	<author>mypalmike</author>
	<datestamp>1247744820000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><p>First, make a bumpmap of each image.  Then, render them onto quads with a light at a 45 degree angle to the surface normal.  Run a gaussian blur on each resulting image.  Then run a quantize filter, followed by lens flare, solarize, and edge-detect.  At this point, the answer will be clear: both images look horrible.</p></htmltext>
<tokenext>First , make a bumpmap of each image .
Then , render them onto quads with a light at a 45 degree angle to the surface normal .
Run a gaussian blur on each resulting image .
Then run a quantize filter , followed by lens flare , solarize , and edge-detect .
At this point , the answer will be clear : both images look horrible .</tokentext>
<sentencetext>First, make a bumpmap of each image.
Then, render them onto quads with a light at a 45 degree angle to the surface normal.
Run a gaussian blur on each resulting image.
Then run a quantize filter, followed by lens flare, solarize, and edge-detect.
At this point, the answer will be clear: both images look horrible.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726999</id>
	<title>Re:File size</title>
	<author>Hognoxious</author>
	<datestamp>1247822760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>jpeg is a lossless format anyways... [snip] Everytime you open and save the image it looses quality.</p></div> </blockquote><p>Do you understand what the <i>less</i> suffix means?</p>
	</htmltext>
<tokenext>jpeg is a lossless format anyways... [ snip ] Everytime you open and save the image it looses quality .
Do you understand what the less suffix means ?</tokentext>
<sentencetext>jpeg is a lossless format anyways... [snip] Everytime you open and save the image it looses quality.
Do you understand what the less suffix means?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724141</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726849</id>
	<title>NASA</title>
	<author>ei4anb</author>
	<datestamp>1247863080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Hello, is that you NASA ?</htmltext>
<tokenext>Hello , is that you NASA ?</tokentext>
<sentencetext>Hello, is that you NASA ?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723929</id>
	<title>Re:I'm not an expert</title>
	<author>Anonymous</author>
	<datestamp>1247744220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'm also not an expert, but I suspect it might work in the other direction far too often.</p><p>Perhaps the artifacts of low-quality jpeg images, embedded in a simple bmp stream, could look more like noise to a general-purpose compressor than "natural" photographs with gradual gradients do.</p><p>And random noise is incompressible.</p></htmltext>
<tokenext>I 'm also not an expert , but I suspect it might work in the other direction far too often.Perhaps artifacts of low-quality jpeg images , embedded in simple stream of bmp , could look more like noise to general purpose compressor ; more than " natural " photographs with gradual gradients.And random noise is incompressible .</tokentext>
<sentencetext>I'm also not an expert, but I suspect it might work in the other direction far too often.Perhaps artifacts of low-quality jpeg images, embedded in simple stream of bmp, could look more like noise to general purpose compressor; more than "natural" photographs with gradual gradients.And random noise is incompressible.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723539</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28734353</id>
	<title>Re:AI problem?</title>
	<author>treeves</author>
	<datestamp>1247822100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Maybe make a RECAPTCHA problem out of it. Get more people looking at each image. Works well for digitizing old books.</htmltext>
<tokenext>Maybe make a RECAPTCHA problem out of it .
Get more people looking at each image .
Works well for digitizing old books .</tokentext>
<sentencetext>Maybe make a RECAPTCHA problem out of it.
Get more people looking at each image.
Works well for digitizing old books.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723591</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724319</id>
	<title>Image Quality Metrics.</title>
	<author>Jeremy Erwin</author>
	<datestamp>1247746380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Something like $\frac{1}{N} \sum\_{i=1}^{N}(x\_i-y\_i)^2$, where $x$ and $y$ are arrays of pixels, and $N$ is the number of pixels in each array?</p></htmltext>
<tokenext>Something like $ \ frac { 1 } { N } \ sum \ _ { i = 1 } ^ { N } ( x \ _i-y \ _i ) ^ 2 $ , where $ x $ and $ y $ are arrays of pixels , and $ N is the number of pixels in each array ?</tokentext>
<sentencetext>Something like $\frac{1}{N} \sum\_{i=1}^{N}(x\_i-y\_i)^2$, where $x$ and $y$ are arrays of pixels, and $N is the number of pixels in each array?</sentencetext>
</comment>
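The formula above is just the mean squared error (MSE); PSNR is the usual logarithmic repackaging of it. A minimal numpy sketch (numpy assumed available) -- note that both need the original as a reference, which is exactly what the OP lacks:

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two equal-sized pixel arrays:
    (1/N) * sum((x_i - y_i)^2)."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    return float(np.mean((x - y) ** 2))

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher = closer to reference."""
    m = mse(x, y)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```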
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724387</id>
	<title>GREYCstoration</title>
	<author>Rashdot</author>
	<datestamp>1247746860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Run the free GREYCstoration algorithm on both images, subtract results from original, and pick the one most similar to the original: <a href="http://www.greyc.ensicaen.fr/~dtschump/greycstoration/" title="ensicaen.fr" rel="nofollow">http://www.greyc.ensicaen.fr/~dtschump/greycstoration/</a> [ensicaen.fr]</p></htmltext>
<tokenext>Run the free GREYCstoration algorithm on both images , subtract results from original , and pick the one most similar to the original : http : //www.greyc.ensicaen.fr/ ~ dtschump/greycstoration/ [ ensicaen.fr ]</tokentext>
<sentencetext>Run the free GREYCstoration algorithm on both images, subtract results from original, and pick the one most similar to the original: http://www.greyc.ensicaen.fr/~dtschump/greycstoration/ [ensicaen.fr]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725923</id>
	<title>JPEG compression - say no to jpeg!</title>
	<author>Fotograf</author>
	<datestamp>1247762640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>jpeg is plain evil.

OP problem can imo be solved by reading of JPEG compression level, sure it wont help if image is multiple times recompressed but looking up together to size and compression level from header should be enough</htmltext>
<tokenext>jpeg is plain evil .
OP problem can imo be solved by reading of JPEG compression level , sure it wont help if image is multiple times recompressed but looking up together to size and compression level from header should be enough</tokentext>
<sentencetext>jpeg is plain evil.
OP problem can imo be solved by reading of JPEG compression level, sure it wont help if image is multiple times recompressed but looking up together to size and compression level from header should be enough</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723667</id>
	<title>Re:File size</title>
	<author>Anonymous</author>
	<datestamp>1247742960000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>File size may not be accurate if it has been converted multiple times at different quality, or if the source is actually lower quality.</p><p>The only way to properly compare is if you have the original as the control.</p><p>If you compare between 2 different JPEG quality images, the program won't know which parts are the artifacts. You still have to decide yourself...</p></htmltext>
<tokenext>File size may not be accurate if it has been converted multiple times at different quality , or if the source is actually lower quality.The only way to properly compare is if you have the original as the control.If you compare between 2 different JPEG quality images , the program wo n't know which parts are the artifacts .
You still have to decide yourself.. .</tokentext>
<sentencetext>File size may not be accurate if it has been converted multiple times at different quality, or if the source is actually lower quality.The only way to properly compare is if you have the original as the control.If you compare between 2 different JPEG quality images, the program won't know which parts are the artifacts.
You still have to decide yourself...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726935</id>
	<title>shameless plug</title>
	<author>pyropunk51</author>
	<datestamp>1247821560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'm assuming you want to automatically/programmatically discard the one with the least/most artifacts. In this case there are very few programs around, but I'm working on a rules engine for my program that may be able to help you in future. Please evaluate DuMP3 at <a href="http://dump3.sourceforge.net/" title="sourceforge.net" rel="nofollow">http://dump3.sourceforge.net/</a> [sourceforge.net] to see if it may suit your needs.</p></htmltext>
<tokenext>I 'm assuming you want to automatically/programmatically discard the one with the least/most artifacts .
In this case there are very few programs around , but I 'm working on a rules engine for my program that may be able to help you in future .
Please evaluate DuMP3 at http : //dump3.sourceforge.net/ [ sourceforge.net ] to see if it may suit your needs .</tokentext>
<sentencetext>I'm assuming you want to automatically/programmatically discard the one with the least/most artifacts.
In this case there are very few programs around, but I'm working on a rules engine for my program that may be able to help you in future.
Please evaluate DuMP3 at http://dump3.sourceforge.net/ [sourceforge.net] to see if it may suit your needs.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28727785</id>
	<title>KISS</title>
	<author>SNACKeR</author>
	<datestamp>1247834160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If you know you have the original files, the file with the oldest date has the best quality. Else, go by file size first, and break ties using the oldest date as the winner.</p></htmltext>
<tokenext>If you know you have the original files , the file with the oldest date has the best quality .
Else , go by file size first , and break ties using the oldest date as the winner .</tokentext>
<sentencetext>If you know you have the original files, the file with the oldest date has the best quality.
Else, go by file size first, and break ties using the oldest date as the winner.</sentencetext>
</comment>
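The KISS rule above fits in one line of Python (stdlib only); `best_candidate` is a hypothetical name, and the heuristic only holds if the files really are generations of the same original:

```python
import os

def best_candidate(paths):
    """Largest file wins; the oldest modification time breaks ties
    (bigger JPEG = fewer artifacts, older = closer to the source)."""
    return max(paths, key=lambda p: (os.path.getsize(p), -os.path.getmtime(p)))
```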
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725275</id>
	<title>Re:Easy or using evolution like this...</title>
	<author>barwasp</author>
	<datestamp>1247754000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><a href="http://www.colordev.com/TILEBG/show\_screens.php" title="colordev.com" rel="nofollow">Tiled background</a> [colordev.com] images by evolution<br>
<a href="http://www.colordev.com/HBAR/show\_screens.php" title="colordev.com" rel="nofollow">Horizontal 3d bars</a> [colordev.com] by evolution<br>
<a href="http://www.colordev.com/VBAR/show\_screens.php" title="colordev.com" rel="nofollow">Vertical 3d bars</a> [colordev.com] by evolution</htmltext>
<tokenext>Tiled background [ colordev.com ] images by evolution Horizontal 3d bars [ colordev.com ] by evolution Vertical 3d bars [ colordev.com ] by evolution</tokentext>
<sentencetext>Tiled background [colordev.com] images by evolution
Horizontal 3d bars [colordev.com] by evolution
Vertical 3d bars [colordev.com] by evolution</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723511</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724731</id>
	<title>Subjective...</title>
	<author>GWBasic</author>
	<datestamp>1247749320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Well, your problem is that image quality is subjective.  Can computers make good subjective judgements?  Not really.</p><p>Let's say you count the number of pixels that are different?  Well, what if JPEG usually slightly alters the brightness?  You could weight the difference, but what if JPEG sometimes moves an edge by a pixel?</p><p>I think if you study a bit about how JPEG works, you might find that you can computationally determine how much information is lost; but that does not mean that your computed number is in any way related to what a human will say the image quality is.</p></htmltext>
<tokenext>Well , your problem is that image quality is subjective .
Can computers make good subjective judgements ?
Not really.Let 's say you count the number of pixels that are different ?
Well , what if JPEG usually slightly alters the brightness ?
You could weight the difference , but what if JPEG sometimes moves an edge by a pixel ? I think if you study a bit about how JPEG works , you might find that you can computationally determine how much information that is lost ; but that does not mean that your computed number in any way is related to what a human will say the image quality is .</tokentext>
<sentencetext>Well, your problem is that image quality is subjective.
Can computers make good subjective judgements?
Not really.Let's say you count the number of pixels that are different?
Well, what if JPEG usually slightly alters the brightness?
You could weight the difference, but what if JPEG sometimes moves an edge by a pixel?I think if you study a bit about how JPEG works, you might find that you can computationally determine how much information that is lost; but that does not mean that your computed number in any way is related to what a human will say the image quality is.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723537</id>
	<title>Admit your a huge faggot</title>
	<author>Anonymous</author>
	<datestamp>1247742420000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Then rape your face.</p></htmltext>
<tokenext>Then rape your face .</tokentext>
<sentencetext>Then rape your face.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723753</id>
	<title>The most obvious artefacts...</title>
	<author>91degrees</author>
	<datestamp>1247743320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Seems that if I really overcompress a JPEG, the main problems are at the edges of the blocks.  This is not really unexpected.  <br> <br>
So a simple first pass would be to apply a simple edge detector and look for discontinuities at the edges of the 8x8 blocks.  For an example, just try an edge detector in any decent image editing app on an overcompressed JPEG.</htmltext>
<tokenext>Seems that if I really overcompress a JPEG , the main problems are at the edges of the blocks .
This is not really unexpected .
So a simple first pass would be to apply a simple edge detector and look for discontinuities at the edges of the 8x8 blocks .
For an example , just try an edge detector in any decent image editing app on an overcompressed JPEG .</tokentext>
<sentencetext>Seems that if I really overcompress a JPEG, the main problems are at the edges of the blocks.
This is not really unexpected.
So a simple first pass would be to apply a simple edge detector and look for discontinuities at the edges of the 8x8 blocks.
For an example, just try an edge detector in any decent image editing app on an overcompressed JPEG.</sentencetext>
</comment>
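That first pass can be sketched in a few lines of numpy (assumed available): compare the average luminance step across 8x8 block borders with the average step inside blocks. Values well above 1 suggest blocking artifacts; this is a heuristic, not a calibrated metric:

```python
import numpy as np

def blockiness(gray):
    """Crude JPEG blockiness score for a 2-D grayscale array.

    Ratio of the mean absolute pixel step across 8x8 block borders
    to the mean step inside blocks; clean images score near 1,
    heavily compressed ones score higher."""
    g = np.asarray(gray, dtype=np.float64)
    dh = np.abs(np.diff(g, axis=1))        # steps between adjacent columns
    dv = np.abs(np.diff(g, axis=0))        # steps between adjacent rows
    col = np.arange(dh.shape[1]) % 8 == 7  # steps that cross a block border
    row = np.arange(dv.shape[0]) % 8 == 7
    border = np.concatenate([dh[:, col].ravel(), dv[row, :].ravel()])
    inside = np.concatenate([dh[:, ~col].ravel(), dv[~row, :].ravel()])
    return border.mean() / (inside.mean() + 1e-9)
```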
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726945</id>
	<title>Re:AI problem?</title>
	<author>CarpetShark</author>
	<datestamp>1247821680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Unfortunately, I think you may find that it will simply require a human-level brain.</p></div></blockquote><p>OK, great.  Now where can I find a donkey?</p>
	</htmltext>
<tokenext>Unfortunately , I think you may find that it will simply require a human-level brain.OK , great .
Now where can I find a donkey ?</tokentext>
<sentencetext>Unfortunately, I think you may find that it will simply require a human-level brain.OK, great.
Now where can I find a donkey?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723509</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725457</id>
	<title>Re:Filters</title>
	<author>asifyoucare</author>
	<datestamp>1247756160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You forgot the most important step - putting the images on a wooden table and rephotographing them.</p></htmltext>
<tokenext>You forgot the most important step - putting the images on a wooden table and rephotographing them .</tokentext>
<sentencetext>You forgot the most important step - putting the images on a wooden table and rephotographing them.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724045</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726235</id>
	<title>win32 GQView</title>
	<author>u64</author>
	<datestamp>1247766840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>First I 'jpegtran' all files to even-out different compression methods. Then 'fdupes' and delete all identical files.
<br>
In Windows:
for<nobr> <wbr></nobr>/R<nobr> <wbr></nobr>.\ \%\%1 in (*.jpg *.jpeg) do jpegtran -optimize -perfect -copy none -progressive "\%\%1" "\%\%1"<br>
Duplicate File Finder (Empty RecycleBin before to avoid confusion)
<br>Then I use GQView (exists for both Linux and win32).
Set Preferences, Advanced, Custom Similarity to 98\% to begin with.
GQView Menu, New Collection, Load list of files and select Compare.<br>
<br>BONUS DISK-SPACE: jscl.exe -d -j -n -r -s *.jpg<br>hihi</htmltext>
<tokenext>First i 'jpegtran ' all files to even-out different compression methods .
Then 'fdupes ' and delete all identical files .
In Windows : for /R . \ \ % \ % 1 in ( * .jpg * .jpeg ) do jpegtran -optimize -perfect -copy none -progressive " \ % \ % 1 " " \ % \ % 1 " Duplicate File Finder ( Empty RecycleBin before to avoid confusion ) Then i use GQView ( exist for both Linux and win32 ) .
Set Preferences , Advanced , Custom Similarity to 98 \ % to begin with .
GQView Menu , New Collection , Load list of files and select Compare .
BONUS DISK-SPACE : jscl.exe -d -j -n -r -s * .jpghihi</tokentext>
<sentencetext>First i 'jpegtran' all files to even-out different compression methods.
Then 'fdupes' and delete all identical files.
In Windows:
for /R .\ \%\%1 in (*.jpg *.jpeg) do jpegtran -optimize -perfect -copy none -progressive "\%\%1" "\%\%1"
Duplicate File Finder (Empty RecycleBin before to avoid confusion)
Then i use GQView (exist for both Linux and win32).
Set Preferences, Advanced, Custom Similarity to 98\% to begin with.
GQView Menu, New Collection, Load list of files and select Compare.
BONUS DISK-SPACE: jscl.exe -d -j -n -r -s *.jpghihi</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28727417</id>
	<title>inverse DCT comparison</title>
	<author>Anonymous</author>
	<datestamp>1247828940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>JPEG images are built of 8x8 blocks. Those are then DCT'd (Discrete Cosine Transform) in order to get the block's frequency spectrum. The element x=1, y=1 is the so-called DC channel (like Direct Current). It is usually the average of the whole 8x8 block. The other positions are frequencies increasing with the position (e.g. pos x=2 is one whole oscillation, where x=4 are 2 (or 4, don't remember) oscillations).</p><p>Now to the task. If you look at the DC component, and the other components are relatively small, this means that there is not much information in this block (e.g. if it is just a blue spot in a picture with the sky). However, if you have two similar pictures, you can compare block by block. The picture which has higher components in the higher x and y values will be the one with the better quality, since high frequency means: high details.</p><p>Of course, implementing this may be difficult. There is not just DCT involved, but also a zip-like algorithm, and the actual compression is done by "rounding" the components' values to integers (since DCT itself doesn't do any compression).</p><p>Maybe one could adapt a jpeg library by inserting some code in the decompression algorithm which creates a "fingerprint" of the individual blocks, and then compare it with the other picture's fingerprint. I think the result should really tell the quality difference.</p><p>Cheers</p></htmltext>
<tokenext>JPEG Images are built of 8x8 Blocks .
Thoser are then DCT 'd ( Discrete Cosine Transform ) in order to get the Block 's frequency spectrum .
The element x = 1 , y = 1 is the so-called DC-Channel ( Like Direct Current ) .
It is usually the average of the whole 8x8 Block .
The other positions are frequencies incresing with the position ( e.g .
pos x = 2 is one whole oscillation , where x = 4 are 2 ( or 4 , do n't remember ) oscillations ) .Now to the task .
If you look at the DC-Component , and the other components are relatively small , this means that there is not much information in this block ( e.g .
if it is just a blue spot in a picture with the sky ) .
However , if you have two similar pictures , you can compare block by block .
The picture ehich has higher components in the higher x and y values will be the one with the better quality , since high frequency means : high details.Of course , implementing this be difficult .
There is not just DCT involved , but also a zip like algorithm , and the actal compressions is done by " rounding " the components values to integers ( since DCT itself does n't do any compression ) .Maybe one could adapt a jpeg library by inserting some code in the decompression algorithm which creates a " fingerprint " of the individual blocks , and then compare it with the other picture 's fingerprint .
I think the result shoudl really tell the quality difference.Cheers</tokentext>
<sentencetext>JPEG Images are built of 8x8 Blocks.
Thoser are then DCT'd (Discrete Cosine Transform) in order to get the Block's frequency spectrum.
The element x=1, y=1 is the so-called DC-Channel (Like Direct Current).
It is usually the average of the whole 8x8 Block.
The other positions are frequencies incresing with the position (e.g.
pos x=2 is one whole oscillation, where x=4 are 2 (or 4, don't remember) oscillations).Now to the task.
If you look at the DC-Component, and the other components are relatively small, this means that there is not much information in this block (e.g.
if it is just a blue spot in a picture with the sky).
However, if you have two similar pictures, you can compare block by block.
The picture ehich has higher components in the higher x and y values will be the one with the better quality, since high frequency means: high details.Of course, implementing this be difficult.
There is not just DCT involved, but also a zip like algorithm, and the actal compressions is done by "rounding" the components values to integers (since DCT itself doesn't do any compression).Maybe one could adapt a jpeg library by inserting some code in the decompression algorithm which creates a "fingerprint" of the individual blocks, and then compare it with the other picture's fingerprint.
I think the result shoudl really tell the quality difference.Cheers</sentencetext>
</comment>
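The block-by-block comparison described above can be sketched with scipy's `dct` (scipy and numpy assumed available). Score each image by the non-DC ("AC") energy of its 8x8 block DCTs; between two near-duplicates of the same scene, more surviving AC energy suggests gentler quantization -- though heavy noise would also score high, so treat it as a heuristic:

```python
import numpy as np
from scipy.fftpack import dct

def ac_energy(gray):
    """Sum of squared non-DC DCT coefficients over all 8x8 blocks
    of a 2-D grayscale array (trailing partial blocks are ignored)."""
    g = np.asarray(gray, dtype=np.float64)
    h, w = (g.shape[0] // 8) * 8, (g.shape[1] // 8) * 8
    total = 0.0
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            block = g[i:i + 8, j:j + 8]
            # 2-D type-II DCT, the transform JPEG uses per block
            coeffs = dct(dct(block.T, norm="ortho").T, norm="ortho")
            total += (coeffs ** 2).sum() - coeffs[0, 0] ** 2
    return total
```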
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723719</id>
	<title>use a "difference matte"</title>
	<author>Anonymous</author>
	<datestamp>1247743200000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext><p>load up both images in adobe after effects or some other image compositing program and apply a "difference matte"</p><p>Any differences in pixel values between the two images will show up as black on a white background or vise versa...</p><p>adam<br>BOXXlabs</p></htmltext>
<tokenext>load up both images in adobe after effects or some other image compositing program and apply a " difference matte " Any differences in pixel values between the two images will show up as black on a white background or vise versa...adamBOXXlabs</tokentext>
<sentencetext>load up both images in adobe after effects or some other image compositing program and apply a "difference matte"Any differences in pixel values between the two images will show up as black on a white background or vise versa...adamBOXXlabs</sentencetext>
</comment>
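The same trick works without After Effects; Pillow (assumed installed) ships the operation as `ImageChops.difference`:

```python
from PIL import Image, ImageChops

def difference_matte(fp_a, fp_b):
    """Per-pixel absolute difference of two same-sized images.

    Bright pixels mark where the two near-duplicates disagree --
    the software equivalent of a compositor's difference matte."""
    a = Image.open(fp_a).convert("RGB")
    b = Image.open(fp_b).convert("RGB")
    return ImageChops.difference(a, b)

# difference_matte("v1.jpg", "v2.jpg").show()
```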
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28729421</id>
	<title>Eyeballing it works best</title>
	<author>Anonymous</author>
	<datestamp>1247843760000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Just looking at the image works best, especially when you have to judge between for example an image that has higher resolution and one that has less artifacts. The only way you can really tell which one will look best to you is by looking at it.</p></htmltext>
<tokenext>Just looking at the image works best , especially when you have to judge between for example an image that has higher resolution and one that has less artifacts .
The only way you can really tell which one will look best to you is by looking at it .</tokentext>
<sentencetext>Just looking at the image works best, especially when you have to judge between for example an image that has higher resolution and one that has less artifacts.
The only way you can really tell which one will look best to you is by looking at it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723569</id>
	<title>Well...</title>
	<author>Anonymous</author>
	<datestamp>1247742540000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>If you want to know which image has more artefacts, it would still be hard to tell what is an artefact and what's supposed to be part of the image.</p><p>If you just want to know which is more compressed... don't jpeg images store the compression ratio used the last time they were saved?  It should be in the header somewhere.</p></htmltext>
<tokenext>If you want to know which image has more artefacts , it would still be hard to tell what is an artefact and whats supposed to be part of the image.If you just want to know which is more compressed.. dont jpeg images store the compression ratio used the last time they were saved ?
It should be in the header somewhere .</tokentext>
<sentencetext>If you want to know which image has more artefacts, it would still be hard to tell what is an artefact and whats supposed to be part of the image.If you just want to know which is more compressed.. dont jpeg images store the compression ratio used the last time they were saved?
It should be in the header somewhere.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723943</id>
	<title>Well</title>
	<author>Anonymous</author>
	<datestamp>1247744340000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>You could just open the low quality images and save them with a higher quality setting.</p></htmltext>
<tokenext>You could just open the low quality images and save them with a higher quality setting .</tokentext>
<sentencetext>You could just open the low quality images and save them with a higher quality setting.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725233</id>
	<title>What difference does it make?</title>
	<author>tpstigers</author>
	<datestamp>1247753700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Umm...... I have to ask.  If you can't tell just by looking at them, what difference does it make?</htmltext>
<tokenext>Umm...... I have to ask .
If you ca n't tell just by looking at them , what difference does it make ?</tokentext>
<sentencetext>Umm...... I have to ask.
If you can't tell just by looking at them, what difference does it make?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724161</id>
	<title>Re:image quality measures</title>
	<author>Anonymous</author>
	<datestamp>1247745300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Any idea where to get hold of that paper? It only pops up as a reference for me.</p></htmltext>
<tokenext>Any idea where to get hold of that paper ?
It only pops up as a reference for me .</tokentext>
<sentencetext>Any idea where to get hold of that paper?
It only pops up as a reference for me.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723813</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725383</id>
	<title>Re:Easy</title>
	<author>Hurricane78</author>
	<datestamp>1247755140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This can be scripted too, with ImageMagick.</p></htmltext>
<tokenext>This can be scripted too .
With imagemagick .</tokentext>
<sentencetext>This can be scripted too.
With imagemagick.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723511</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28731903</id>
	<title>Re:File size</title>
	<author>changedx</author>
	<datestamp>1247854320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The tricky thing is that the OP is asking two separate questions:
<br>
1) How do I group images of similar content (e.g. Natalie Portman eating hot grits) which may be of different dimensions/resolutions?
<br>
2) How do I choose the best archetype in each group?
<br> <br>
For 1, the above poster is correct: 95% of the time, the better image will have a larger file size.
<br>
For 2, you'll need specialized software that can measure image similarity.  If the software doesn't do automatic resize/rescale, you'll need to script that too.</htmltext>
<tokenext>The tricky thing is that the OP is asking two separate questions : 1 ) How do I group images of similar content ( e.g .
Natalie Portman eating hot grits ) which may be of different dimensions/resolutions ?
2 ) How do I choose the best archetype in each group ?
For 1 , the above poster is correct : 95 \ % of the time , the better image will have a larger file size .
For 2 , you 'll need specialized software that can measure image similarity .
If the software does n't do automatic resize/rescale , you 'll need to script that too .</tokentext>
<sentencetext>The tricky thing is that the OP is asking two separate questions:

1) How do I group images of similar content (e.g.
Natalie Portman eating hot grits) which may be of different dimensions/resolutions?
2) How do I choose the best archetype in each group?
For 1, the above poster is correct: 95\% of the time, the better image will have a larger file size.
For 2, you'll need specialized software that can measure image similarity.
If the software doesn't do automatic resize/rescale, you'll need to script that too.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725545</parent>
</comment>
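The similarity-grouping half of the problem (question 1 above) is usually handled with a perceptual hash. A minimal average-hash sketch, assuming a grayscale image as a NumPy array; the function names are my own, and downscaling is done by block averaging so no imaging library is needed:

```python
import numpy as np

def average_hash(img, size=8):
    """Tiny perceptual hash: shrink to size x size by block averaging, then
    record which cells are above the mean. Near-duplicates (including
    re-encodes of the same shot at different JPEG qualities) end up with
    mostly matching bits."""
    h, w = img.shape
    small = img[:h - h % size, :w - w % size].astype(float)
    small = small.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return small > small.mean()

def hamming(a, b):
    # Number of differing hash bits; small distance = probable duplicates.
    return int(np.sum(a != b))
```

Images whose hashes differ in only a few of the 64 bits are candidates for the same group; the cut-off is a tunable heuristic, not a hard rule.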
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724101</id>
	<title>different images?</title>
	<author>Anonymous</author>
	<datestamp>1247745060000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"It's known that saving the same source image in JPEG format at different quality levels produces different images"</p><p>news to me.</p></htmltext>
<tokenext>" It 's known that saving the same source image in JPEG format at different quality levels produces different images " news to me .</tokentext>
<sentencetext>"It's known that saving the same source image in JPEG format at different quality levels produces different images"news to me.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28728971</id>
	<title>Re:File size</title>
	<author>Mr. Suck</author>
	<datestamp>1247841900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Another more common example of this issue is the artifacts potentially introduced when an image is resized (resampled) - different resampling algorithms have differing quality.</p><p>A potentially intractable aspect of this problem is that there is no reference image supplied - your proposed algorithms have nothing concrete to be scored against so you have no way to objectively pick the best one.</p></htmltext>
<tokenext>Another more common example of this issue is the artifacts potentially introduced when an image is resized ( resampled ) - different resampling algorithms have differing quality.A potentially intractable aspect of this problem is that there is no reference image supplied - your proposed algorithms have nothing concrete to be scored against so you have no way to objectively pick the best one .</tokentext>
<sentencetext>Another more common example of this issue is the artifacts potentially introduced when an image is resized (resampled) - different resampling algorithms have differing quality.A potentially intractable aspect of this problem is that there is no reference image supplied - your proposed algorithms have nothing concrete to be scored against so you have no way to objectively pick the best one.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723651</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724885</id>
	<title>Entropy?</title>
	<author>Anonymous</author>
	<datestamp>1247750580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The 'quality' of a picture, as stated, is still a bit vague.  If you have an image of a completely blue wall, I believe the entire picture could be compressed to a single 'artifact', yet retain the same amount of information as a bitmap.  Perhaps what you're after is the amount of information given in an image.<br>Information Theory should help there.  http://en.wikipedia.org/wiki/Entropy_(information_theory)<br>One quick and dirty method might take the histogram of the image, and then find the one with the greatest (or least) standard deviation.  You could map light/depth, colors, etc. to the histogram and see which one best suits your needs.  It's not flawless (if for some reason you wanted a very blue wall instead of picking up little defects of dirt) but it could work.</p></htmltext>
<tokenext>The 'quality ' of a picture , as stated , is still a bit vague .
If you have an image of a completely blue wall , I believe the entire picture could be compressed to a single 'artifact ' , yet retain the same amount of information as a bitmap .
Perhaps what you 're after is the amount of information given in an image.Information Theory should help there .
http : //en.wikipedia.org/wiki/Entropy \ _ ( information \ _theory ) One quick and dirty method might take the histogram of the image , and then find the one with the greatest ( or least ) standard deviation .
You could map light/depth , colors , etc to the histogram and see which one best suits your needs .
It 's not flawless ( if for some reason you wanted a very blue wall instead of picking up little defects of dirt ) but it could work .</tokentext>
<sentencetext>The 'quality' of a picture, as stated, is still a bit vague.
If you have an image of a completely blue wall, I believe the entire picture could be compressed to a single 'artifact', yet retain the same amount of information as a bitmap.
Perhaps what you're after is the amount of information given in an image.Information Theory should help there.
http://en.wikipedia.org/wiki/Entropy\_(information\_theory)One quick and dirty method might take the histogram of the image, and then find the one with the greatest (or least) standard deviation.
You could map light/depth, colors, etc to the histogram and see which one best suits your needs.
It's not flawless (if for some reason you wanted a very blue wall instead of picking up little defects of dirt) but it could work.</sentencetext>
</comment>
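The histogram idea above can be sketched straight from the definition of Shannon entropy. This is only an illustration of the commenter's heuristic, not a validated quality metric; `histogram_entropy` is a name invented for the example:

```python
from collections import Counter
from math import log2

def histogram_entropy(pixels):
    """Shannon entropy (bits per pixel) of the intensity histogram.

    Scores only the distribution of pixel values, not their spatial
    arrangement, so it is at best a coarse 'amount of information' proxy,
    as the comment itself concedes.
    """
    n = len(pixels)
    return -sum(c / n * log2(c / n) for c in Counter(pixels).values())
```

A flat blue wall (one value everywhere) scores 0 bits, while a pixel list using all 256 gray levels equally often scores the maximum 8 bits.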
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723825</id>
	<title>Re:File size</title>
	<author>Anonymous</author>
	<datestamp>1247743740000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext><p><a href="http://linux.maruhn.com/sec/jpegoptim.html" title="maruhn.com">http://linux.maruhn.com/sec/jpegoptim.html</a> [maruhn.com]</p><p>No.  You can compress JPEG losslessly.</p></htmltext>
<tokenext>http : //linux.maruhn.com/sec/jpegoptim.html [ maruhn.com ] No .
You can compress JPEG lossless .</tokentext>
<sentencetext>http://linux.maruhn.com/sec/jpegoptim.html [maruhn.com]No.
You can compress JPEG lossless.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726779</id>
	<title>Re:AI problem?</title>
	<author>Lord Crc</author>
	<datestamp>1247862180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Even simpler mathematical analysis would include such techniques as seeing which one takes up more disk space. Last I checked, that was very highly correlated with compression level.</p></div><p>The problem is that there are many choices left to the compression program which affect the quality/size trade-off. A high-quality compression program might generate optimized quantization tables for that specific image, resulting in a superior image at lower bitrate compared to, say, the standard libjpeg implementation.</p>
	</htmltext>
<tokenext>Even simpler mathematical analysis would include such techniques as seeing which one takes up more disk space .
Last I checked , that was very highly correlated with compression level.The problem is that there are many choices left to the compression program which affect the quality/size trade-off .
A high quality compression program might generate optimized quantization tables for that specific image , resulting in a superior image at lower bitrate compared to say the standard libjpeg implementation .</tokentext>
<sentencetext>Even simpler mathematical analysis would include such techniques as seeing which one takes up more disk space.
Last I checked, that was very highly correlated with compression level.The problem is that there are many choices left to the compression program which affect the quality/size trade-off.
A high quality compression program might generate optimized quantization tables for that specific image, resulting in a superior image at lower bitrate compared to say the standard libjpeg implementation.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724765</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724655</id>
	<title>Just sort by the size</title>
	<author>MikeBabcock</author>
	<datestamp>1247748660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>JPEG is pretty efficient at compressing images -- the only way they get smaller on average is by increasing the quality loss.  Therefore, the larger of the two images in bytes is probably the better-looking copy.</p></htmltext>
<tokenext>JPEG is pretty efficient at compressing images -- the only way they get smaller on average is by increasing the quality loss .
Therefore , the larger of the two images in bytes is probably the better looking copy .</tokentext>
<sentencetext>JPEG is pretty efficient at compressing images -- the only way they get smaller on average is by increasing the quality loss.
Therefore, the larger of the two images in bytes is probably the better looking copy.</sentencetext>
</comment>
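Automating the size heuristic is essentially a one-liner; a sketch (the helper name is my own), with the caveat from other replies that it only makes sense when the candidates really are the same image at the same resolution:

```python
import os

def keep_largest(paths):
    """Among near-duplicate JPEGs of the same dimensions, return the path
    with the largest file size - the 'probably least compressed' pick.

    Pure heuristic: resolution differences or different source shots
    break the assumption entirely.
    """
    return max(paths, key=os.path.getsize)
```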
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725775</id>
	<title>Re:AI problem?</title>
	<author>iluvcapra</author>
	<datestamp>1247761080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'm pretty sure it's impossible, information-theoretically, to examine the bitmap of several images and decide which among them is of the "highest quality," because in order to decide the fidelity of an image you need the original un-lossy image to compare with the others and make an objective determination of the total signal noise and distortion.  Either that, or somehow have metadata in the file that captures knowledge of how much data was lost in the transform.</p><p>You could use mturk or some sort of program that detects the signature of particular JPEG artifacts, but in the end this will just be heuristic, and won't give you a positive answer.  For all of these heuristics, I'd bet simply nominating the largest of all files found to be of the same image will pick the best image as often as any more sophisticated method.</p></htmltext>
<tokenext>I 'm pretty sure it 's impossible , information-theoretically , to examine the bitmap of several images and decide which among them is of the " highest quality , " because you in order to decide the fidelity of an image you need the original un-lossy image , to compare with the others to make an objective determination of the total signal noise and distortion .
Either that , or somehow have metadata in the file that captures knowledge of how much data was lost in the transform.You could use mturk or some sort of program that detects the signature of particular JPEG artifacts , but in the end this will just be heuristic , and wo n't give you a positive answer .
For all of these heuristics , I 'd bet simply nominating the largest of all files found to be of the same image will pick the best image as often as any more sophisticated method .</tokentext>
<sentencetext>I'm pretty sure it's impossible, information-theoretically, to examine the bitmap of several images and decide which among them is of the "highest quality," because you in order to decide the fidelity of an image you need the original un-lossy image, to compare with the others to make an objective determination of the total signal noise and distortion.
Either that, or somehow have metadata in the file that captures knowledge of how much data was lost in the transform.You could use mturk or some sort of program that detects the signature of particular JPEG artifacts, but in the end this will just be heuristic, and won't give you a positive answer.
For all of these heuristics, I'd bet simply nominating the largest of all files found to be of the same image will pick the best image as often as any more sophisticated method.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723509</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726373</id>
	<title>Try VisiPics (freeware.)</title>
	<author>Anonymous</author>
	<datestamp>1247768640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Try VisiPics (freeware).</p></htmltext>
<tokenext>Try VisiPics ( freeware .
)</tokentext>
<sentencetext>Try VisiPics (freeware.
)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724407</id>
	<title>Re:File size</title>
	<author>izomiac</author>
	<datestamp>1247746920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>File size doesn't tell you anything. If I take a picture with a bunch of noise (eg. poor lighting) in it then it will not compress as well. If I take the same picture with perfect lighting it might be higher quality but smaller file size.</p></div></blockquote><p>

That sounds like you took two different pictures and have two different files.  Comparing file size obviously wouldn't work for different pictures, nor could I see why anyone would want to automatically delete one of them.  But if it's the same picture, just more highly compressed, then the file size would almost certainly be greater for the less compressed image.  Essentially by definition, since that's the whole point of compression.</p>
	</htmltext>
<tokenext>File size does n't tell you anything .
If I take a picture with a bunch of noise ( eg .
poor lighting ) in it then it will not compress as well .
If I take the same picture with perfect lighting it might be higher quality but smaller file size .
That sounds like you took two different pictures and have two different files .
Comparing file size obvious would n't work for different pictures , nor could I see why anyone would want to automatically delete one of them .
But if it 's the same picture , just more highly compressed , then the file size would almost certainly be greater for the less compressed image .
Essentially by definition , since that 's the whole point of compression .</tokentext>
<sentencetext>File size doesn't tell you anything.
If I take a picture with a bunch of noise (eg.
poor lighting) in it then it will not compress as well.
If I take the same picture with perfect lighting it might be higher quality but smaller file size.
That sounds like you took two different pictures and have two different files.
Comparing file size obvious wouldn't work for different pictures, nor could I see why anyone would want to automatically delete one of them.
But if it's the same picture, just more highly compressed, then the file size would almost certainly be greater for the less compressed image.
Essentially by definition, since that's the whole point of compression.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723707</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724995</id>
	<title>Re:File size</title>
	<author>Anonymous</author>
	<datestamp>1247751600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Why is yours modded up higher, I wonder. The OP wants to "compare two visually similar JPEG images and select the one with the fewest JPEG artifacts". That means they're the same image. That means file size will help you there, unless they're not the same resolution, although it should help regardless.

</p><p> <i>If I take a picture with a bunch of noise (eg. poor lighting) in it then it will not compress as well. If I take the same picture with perfect lighting it might be higher quality but smaller file size.</i> </p><p>That's if you compensate for the poor lighting so it appears as bright though. But yeah, at that point that makes it completely different images, so why talk about lighting or noise, why not talk about the smoothness of features photographed or whatever else.

</p><p>Which reminds me, am I the only one who can tell from looking at the various file sizes in a folder containing a set of photographs (let's say, porn) which is a close up or not?</p></htmltext>
<tokenext>Why is yours modded up higher I wonder .
The OP wants to " compare two visually similar JPEG images and select the one with the fewest JPEG artifacts " .
That means they 're the same image .
That means file size will help you there , unless they 're not the same resolution , although it should do regardless .
If I take a picture with a bunch of noise ( eg .
poor lighting ) in it then it will not compress as well .
If I take the same picture with perfect lighting it might be higher quality but smaller file size .
That 's if you compensate for the poor lighting so it appears as bright though .
But yeah , at that point that makes it completely different images , so why talk about lighting or noise , why not talk about the smoothness of features photographed or whatever else .
Which reminds me , am I the only one who can tell from looking at the various file sizes in a folder containing a set of photographs ( let 's say , porn ) which is a close up or not ?</tokentext>
<sentencetext>Why is yours modded up higher I wonder.
The OP wants to "compare two visually similar JPEG images and select the one with the fewest JPEG artifacts".
That means they're the same image.
That means file size will help you there, unless they're not the same resolution, although it should do regardless.
If I take a picture with a bunch of noise (eg.
poor lighting) in it then it will not compress as well.
If I take the same picture with perfect lighting it might be higher quality but smaller file size.
That's if you compensate for the poor lighting so it appears as bright though.
But yeah, at that point that makes it completely different images, so why talk about lighting or noise, why not talk about the smoothness of features photographed or whatever else.
Which reminds me, am I the only one who can tell from looking at the various file sizes in a folder containing a set of photographs (let's say, porn) which is a close up or not?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723707</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724301</id>
	<title>Look at the DCT coefficients</title>
	<author>uhmmmm</author>
	<datestamp>1247746140000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext><p>JPEG works by breaking the image into 8x8 blocks and doing a two dimensional discrete cosine transform on each of the color planes for each block.  At this point, no information is lost (except possibly by some slight inaccuracies converting from RGB to YUV as is used in JPEG).  The step where the artifacts are introduced is in quantizing the coefficients.  High frequency coefficients are considered less important and are quantized more than low frequency coefficients.  The level of quantization is raised across the board to increase the level of compression.</p><p>Now, how is this useful?  The reason heavily quantizing results in higher compression is because the coefficients get smaller.  In fact, many become zero, which is particularly good for compression - and the high frequency coefficients in particular tend towards zero.  So partially decode the images and look at the DCT coefficients.  The image with more high frequency coefficients which are zero is likely the lower quality one.</p></htmltext>
<tokenext>JPEG works by breaking the image into 8x8 blocks and doing a two dimensional discrete cosine transform on each of the color planes for each block .
At this point , no information is lost ( except possibly by some slight inaccuracies converting from RGB to YUV as is used in JPEG ) .
The step where the artifacts are introduced is in quantizing the coefficients .
High frequency coefficients are considered less important and are quantized more than low frequency coefficients .
The level of quantization is raised across the board to increase the level of compression.Now , how is this useful ?
The reason heavily quantizing results in higher compression is because the coefficients get smaller .
In fact , many become zero , which is particularly good for compression - and the high frequency coefficients in particular tend towards zero .
So partially decode the images and look at the DCT coefficients .
The image with more high frequency coefficients which are zero is likely the lower quality one .</tokentext>
<sentencetext>JPEG works by breaking the image into 8x8 blocks and doing a two dimensional discrete cosine transform on each of the color planes for each block.
At this point, no information is lost (except possibly by some slight inaccuracies converting from RGB to YUV as is used in JPEG).
The step where the artifacts are introduced is in quantizing the coefficients.
High frequency coefficients are considered less important and are quantized more than low frequency coefficients.
The level of quantization is raised across the board to increase the level of compression.Now, how is this useful?
The reason heavily quantizing results in higher compression is because the coefficients get smaller.
In fact, many become zero, which is particularly good for compression - and the high frequency coefficients in particular tend towards zero.
So partially decode the images and look at the DCT coefficients.
The image with more high frequency coefficients which are zero is likely the lower quality one.</sentencetext>
</comment>
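The parent's suggestion can be prototyped without a full JPEG decoder by re-applying the 8x8 DCT to decoded pixels and counting near-zero high-frequency coefficients. A sketch under those assumptions; the threshold and the u + v >= 8 "high frequency" cut are arbitrary choices of mine, and working from decoded pixels only approximates inspecting the file's actual coefficients:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis - the transform JPEG applies to each 8x8 block.
    k = np.arange(n)
    c = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    c[0, :] /= np.sqrt(2)
    return c

def high_freq_zero_fraction(img, thresh=0.5):
    """Fraction of high-frequency DCT coefficients that are (near) zero
    across all full 8x8 blocks of a grayscale image.

    Heavier JPEG quantization zeroes more high-frequency coefficients,
    so a larger fraction suggests the lower-quality save.
    """
    c = dct_matrix()
    u, v = np.meshgrid(range(8), range(8), indexing="ij")
    hf = (u + v) >= 8                      # lower-right (high-frequency) half
    zeros = total = 0
    h, w = img.shape
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            coeffs = c @ img[y:y + 8, x:x + 8].astype(float) @ c.T
            zeros += int(np.sum(np.abs(coeffs[hf]) < thresh))
            total += int(hf.sum())
    return zeros / total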
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28727981</id>
	<title>Re:I'm not an expert</title>
	<author>Anonymous</author>
	<datestamp>1247836200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Wow, that's awesome!</p><p>I tried it with XnView:<br>Compressed an original image (Image A) to jpeg 50% (Image B) so that I had two comparable images.<br>Converted original and compressed to bitmaps (Image C and Image D). These of course are the same size (20737kb).<br>Then converted the bitmaps to same quality jpegs (Image E and Image F).<br>Image E came to 1124kb<br>Image F came to 806kb</p><p>Genius...</p></htmltext>
<tokenext>Wow , that 's awesome ! I tried it with XnView : Compressed an original image ( Image A ) to jpeg 50 \ % ( Image B ) so that I had two comparable images.Converted original and compressed to bitmaps ( Image C and Image D ) .
These of course are the same size ( 20737kb ) .Then converted the bitmaps to same quality jpegs ( Image E and Image F ) .Image E came to 1124kbImage F came to 806kbGenius.. .</tokentext>
<sentencetext>Wow, that's awesome!I tried it with XnView:Compressed an original image (Image A) to jpeg 50\% (Image B) so that I had two comparable images.Converted original and compressed to bitmaps (Image C and Image D).
These of course are the same size (20737kb).Then converted the bitmaps to same quality jpegs (Image E and Image F).Image E came to 1124kbImage F came to 806kbGenius...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723539</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725967</id>
	<title>There are too many variables.</title>
	<author>thesandbender</author>
	<datestamp>1247763180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The simple fact of the matter is that what you perceive as a "better" image, others won't.  You may look at the primary subject matter, other will look at that and the background.  You may be concerned about the contrast on the picture while others will look at the colors.

While I understand that you're really looking for a good median there is truth to the axiom that "a picture says a thousand words".  Anytime you monkey with it, you're stripping at least a few those words away.

I think a better question is not "how do a compress this picture" but "what pictures should I keep".

Just my $.02</htmltext>
<tokenext>The simple fact of the matter is that what you perceive as a " better " image , others wo n't .
You may look at the primary subject matter , other will look at that and the background .
You may be concerned about the contrast on the picture while others will look at the colors .
While I understand that you 're really looking for a good median there is truth to the axiom that " a picture says a thousand words " .
Anytime you monkey with it , you 're stripping at least a few those words away .
I think a better question is not " how do a compress this picture " but " what pictures should I keep " .
Just my $ .02</tokentext>
<sentencetext>The simple fact of the matter is that what you perceive as a "better" image, others won't.
You may look at the primary subject matter, other will look at that and the background.
You may be concerned about the contrast on the picture while others will look at the colors.
While I understand that you're really looking for a good median there is truth to the axiom that "a picture says a thousand words".
Anytime you monkey with it, you're stripping at least a few those words away.
I think a better question is not "how do a compress this picture" but "what pictures should I keep".
Just my $.02</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724061</id>
	<title>Re:AI problem?</title>
	<author>CajunArson</author>
	<datestamp>1247744940000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>5</modscore>
	<htmltext><p>I don't know about "quality", but frankly it shouldn't be too hard to compare similar images just by doing simple mathematical analysis on the results.  I'm only vaguely familiar with image compression, but if a "worse" JPEG image is more blocky, would it be possible to run edge detection to find the most clearly defined blocks, which would indicate that a particular picture is producing "worse" results?  That's just one idea; I'm sure people who know the compression better can name many other properties that could easily be measured automatically.<br>What a computer can't do is tell you if the image is subjectively worse, unless the same metric that the human uses to subjectively judge a picture happens to match the algorithm the computer is using, and even then it could vary from picture to picture.  For example, a highly colorful picture might hide the artifacting much better than a picture that features lots of text.  While the "blockiness" would be the same mathematically, the subjective human viewing it will notice the artifacts in the text much more.</p></htmltext>
<tokenext>I do n't know about " quality " , but frankly it should n't be too hard to compare similar images just by doing simple mathematical analysis on the results .
I 'm only vaguely familiar with image compression , but if a " worse " JPEG image is more blocky , would it be possible to run edge detection to find the most clearly defined blocks that indicates a particular picture is producing " worse " results ?
That 's just one idea , I 'm sure people who know the compression better can name many other properties that could easily be measured automatically.What a computer ca n't do is tell you if the image is subjectively worse , unless the same metric that the human uses to subjectively judge a picture happens to match the algorithm the computer is using , and even then it could vary by picture to picture .
For example , a highly colorful picture might hide the artifacting much better than a picture that features lots of text .
While the " blockiness " would be the same mathematically , the subjective human viewing it will notice the artifacts in the text much more .</tokentext>
<sentencetext>I don't know about "quality", but frankly it shouldn't be too hard to compare similar images just by doing simple mathematical analysis on the results.
I'm only vaguely familiar with image compression, but if a "worse" JPEG image is more blocky, would it be possible to run edge detection to find the most clearly defined blocks that indicates a particular picture is producing "worse" results?
That's just one idea, I'm sure people who know the compression better can name many other properties that could easily be measured automatically.What a computer can't do is tell you if the image is subjectively worse, unless the same metric that the human uses to subjectively judge a picture happens to match the algorithm the computer is using, and even then it could vary by picture to picture.
For example, a highly colorful picture might hide the artifacting much better than a picture that features lots of text.
While the "blockiness" would be the same mathematically, the subjective human viewing it will notice the artifacts in the text much more.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723509</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724765</id>
	<title>Re:AI problem?</title>
	<author>moderatorrater</author>
	<datestamp>1247749560000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>Even simpler mathematical analysis would include such techniques as seeing which one takes up more disk space. Last I checked, that was very highly correlated with compression level.</htmltext>
<tokenext>Even simpler mathematical analysis would include such techniques as seeing which one takes up more disk space .
Last I checked , that was very highly correlated with compression level .</tokentext>
<sentencetext>Even simpler mathematical analysis would include such techniques as seeing which one takes up more disk space.
Last I checked, that was very highly correlated with compression level.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724061</parent>
</comment>
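The file-size heuristic above is a one-liner; a minimal Python sketch (illustrative only, the function name is my own):

```python
import os

def pick_larger(path_a, path_b):
    # For near-duplicate JPEGs of the same pixels, the larger file
    # was usually saved at a higher quality setting.
    return path_a if os.path.getsize(path_a) >= os.path.getsize(path_b) else path_b
```

As other replies note, this breaks down once the two files differ in resolution, cropping, or encoder, so treat it as a tiebreaker rather than a measurement.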
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725133</id>
	<title>All these replies are right yet all are wrong.</title>
	<author>FlyingGuy</author>
	<datestamp>1247752800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You are asking a machine to make a comparison between "good" and "not good" or "OK" and "fantastic" when all of these choices are by their very nature illusory at best.</p><p>Consider a photo of a person.  I may prefer a softer focus, some may prefer sharper; one viewer may want more color saturation in a pastoral scene, another less. Individuals judge an image in many, many different ways.</p><p>In my youth I did a lot of photography.  I was taking pictures of the Winternationals at Fremont Raceway (when it still existed) and was shooting a funny car as it came off the line.  I was shooting Tri-X and pushing it a full stop, which resulted in a grainy negative.  I did some darkroom magic and came up with a very eye-catching, award-winning photo.  But if you mechanically compared it to the straight shot it would have been inferior.</p><p>The point is you can use a computer to compare some things, but you cannot use a computer to judge "better" in an artistic sense or a "pleasing to the eye" sense.</p></htmltext>
<tokenext>You are asking a machine to make a comparison between " good " and " not good " or " OK " and " fantastic " when all of these choices are by their very nature illusory at best . Consider a photo of a person .
I may prefer a softer focus , some may prefer sharper ; one viewer may want more color saturation in a pastoral scene , another less .
Individuals judge an image in many , many different ways . In my youth I did a lot of photography .
I was taking pictures of the Winternationals at Fremont Raceway ( when it still existed ) and was shooting a funny car as it came off the line .
I was shooting Tri-X and pushing it a full stop , which resulted in a grainy negative .
I did some darkroom magic and came up with a very eye-catching , award-winning photo .
But if you mechanically compared it to the straight shot it would have been inferior . The point is you can use a computer to compare some things , but you can not use a computer to judge " better " in an artistic sense or a " pleasing to the eye " sense .</tokenext>
<sentencetext>You are asking a machine to make a comparison between "good" and "not good" or "OK" and "fantastic" when all of these choices are by their very nature illusory at best. Consider a photo of a person.
I may prefer a softer focus, some may prefer sharper; one viewer may want more color saturation in a pastoral scene, another less.
Individuals judge an image in many, many different ways. In my youth I did a lot of photography.
I was taking pictures of the Winternationals at Fremont Raceway (when it still existed) and was shooting a funny car as it came off the line.
I was shooting Tri-X and pushing it a full stop, which resulted in a grainy negative.
I did some darkroom magic and came up with a very eye-catching, award-winning photo.
But if you mechanically compared it to the straight shot it would have been inferior. The point is you can use a computer to compare some things, but you cannot use a computer to judge "better" in an artistic sense or a "pleasing to the eye" sense.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28729279</id>
	<title>Re:AI problem?</title>
	<author>Liquidretro</author>
	<datestamp>1247843160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I agree, sounds like an algorithm for Mathematica to me.  If you are serious about this, there are people on Flickr who run detailed mathematical image analyses that compare camera to camera, sensor to sensor, etc. for things like noise and other properties.  I would think one of them might be able to help you figure out how to do this best.

You do not want to use people for this process.  Most people, unless specially trained and genuinely attentive, are bad at spotting key differences in photos.</htmltext>
<tokenext>I agree , sounds like an algorithm for Mathematica to me .
If you are serious about this , there are people on Flickr who run detailed mathematical image analyses that compare camera to camera , sensor to sensor , etc. for things like noise and other properties .
I would think one of them might be able to help you figure out how to do this best .
You do not want to use people for this process .
Most people , unless specially trained and genuinely attentive , are bad at spotting key differences in photos .</tokenext>
<sentencetext>I agree, sounds like an algorithm for Mathematica to me.
If you are serious about this, there are people on Flickr who run detailed mathematical image analyses that compare camera to camera, sensor to sensor, etc. for things like noise and other properties.
I would think one of them might be able to help you figure out how to do this best.
You do not want to use people for this process.
Most people, unless specially trained and genuinely attentive, are bad at spotting key differences in photos.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724061</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724933</id>
	<title>Use judge</title>
	<author>Anonymous</author>
	<datestamp>1247750940000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>0</modscore>
	<htmltext><a href="http://oldhome.schmorp.de/marc/judge.html" title="schmorp.de">Judge</a> [schmorp.de]. It's not perfect, but it works.</htmltext>
<tokenext>Judge [ schmorp.de ] .
It 's not perfect , but it works .</tokentext>
<sentencetext>Judge [schmorp.de].
It's not perfect, but it works.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723695</id>
	<title>DCT</title>
	<author>tomz16</author>
	<datestamp>1247743080000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext><p>Just look at the manner in which JPEGs are encoded for your answer!</p><p>Take the DCT (discrete cosine transform) of blocks of pixels throughout the image.  Examine the frequency content of each of these blocks and determine the amount of spatial frequency suppression.  This will correlate with the quality factor used during compression!</p></htmltext>
<tokenext>Just look at the manner in which JPEGs are encoded for your answer ! Take the DCT ( discrete cosine transform ) of blocks of pixels throughout the image .
Examine the frequency content of each of these blocks and determine the amount of spatial frequency suppression .
This will correlate with the quality factor used during compression !
   </tokentext>
<sentencetext>Just look at the manner in which JPEGs are encoded for your answer! Take the DCT (discrete cosine transform) of blocks of pixels throughout the image.
Examine the frequency content of each of these blocks and determine the amount of spatial frequency suppression.
This will correlate with the quality factor used during compression!</sentencetext>
</comment>
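The DCT suggestion above can be sketched numerically. Below is an illustrative Python measure (numpy only; the function names and the choice of which coefficients count as "low frequency" are my own): it takes the 2-D DCT of each 8×8 block, as baseline JPEG does, and reports what fraction of the energy sits above the lowest frequencies. Heavily compressed images, with their high frequencies suppressed, score lower.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix: rows are frequencies, columns samples.
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2 / n)

def high_freq_energy(img):
    """Fraction of DCT energy above the lowest frequencies, averaged over 8x8 blocks."""
    d = dct_matrix()
    h, w = img.shape
    total = high = 0.0
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            block = img[y:y+8, x:x+8].astype(float) - 128.0
            c = d @ block @ d.T                  # 2-D DCT of the block
            e = c ** 2
            total += e.sum()
            high += e.sum() - e[:2, :2].sum()    # everything beyond the 2x2 low-freq corner
    return high / max(total, 1e-12)
```

Comparing two near-duplicates, the one with the noticeably smaller score has probably been through harsher quantization, though as a reply below points out, block-edge discontinuities can add spurious high-frequency energy of their own.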
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725399</id>
	<title>Try looking at the histograms of Y, Cb, Cr</title>
	<author>pclminion</author>
	<datestamp>1247755320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Take advantage of the fact that JPEG quantizes the chrominance information more aggressively at higher compression levels. Quite ridiculously so, in fact. Look at these three images. The first two are the Cb and Cr channels of a highly-compressed JPEG. The third is the luminance channel. Notice that there is WAY more information contained in the luminance channel. This effect gets more and more extreme as JPEG quality goes down.</p><p> <a href="http://neuralnw.com/sd/jpeg.html" title="neuralnw.com">Histograms</a> [neuralnw.com] </p><p>Quantifying this is a different question. Look at the histograms of each of the three channels. The histogram of Cb and Cr is extremely sparse, with a few large peaks, but with no energy in most buckets. The luminance channel, on the other hand, has a much more detailed histogram. I leave it up to the reader to create a formula to boil this all down to a single number.</p></htmltext>
<tokenext>Take advantage of the fact that JPEG quantizes the chrominance information more aggressively at higher compression levels .
Quite ridiculously so , in fact .
Look at these three images .
The first two are the Cb and Cr channels of a highly-compressed JPEG .
The third is the luminance channel .
Notice that there is WAY more information contained in the luminance channel .
This effect gets more and more extreme as JPEG quality goes down .
Histograms [ neuralnw.com ] Quantifying this is a different question .
Look at the histograms of each of the three channels .
The histogram of Cb and Cr is extremely sparse , with a few large peaks , but with no energy in most buckets .
The luminance channel , on the other hand , has a much more detailed histogram .
I leave it up to the reader to create a formula to boil this all down to a single number .</tokentext>
<sentencetext>Take advantage of the fact that JPEG quantizes the chrominance information more aggressively at higher compression levels.
Quite ridiculously so, in fact.
Look at these three images.
The first two are the Cb and Cr channels of a highly-compressed JPEG.
The third is the luminance channel.
Notice that there is WAY more information contained in the luminance channel.
This effect gets more and more extreme as JPEG quality goes down.
Histograms [neuralnw.com] Quantifying this is a different question.
Look at the histograms of each of the three channels.
The histogram of Cb and Cr is extremely sparse, with a few large peaks, but with no energy in most buckets.
The luminance channel, on the other hand, has a much more detailed histogram.
I leave it up to the reader to create a formula to boil this all down to a single number.</sentencetext>
</comment>
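One way to quantify the sparsity this comment describes, as a rough sketch (the BT.601-style conversion approximates what JPEG uses; the bucket-counting heuristic and all names are my own):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    # Full-range ITU-R BT.601 conversion, as used by JFIF/JPEG (approximate).
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def occupied_buckets(channel, bins=256):
    # Heavily quantized chroma collapses onto a few values, so the number
    # of non-empty histogram buckets drops sharply at low JPEG quality.
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
    return int((hist > 0).sum())
```

Between two near-duplicates, the one whose Cb/Cr channels occupy markedly fewer buckets is the more compressed candidate; boiling that into a single score is, as the commenter says, left to the reader.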
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723729</id>
	<title>Try ThumbsPlus</title>
	<author>Anonymous</author>
	<datestamp>1247743260000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext><p>ThumbsPlus is an image management tool. It has a feature called "find similar" that should do what you want as far as identifying two pictures that are the same except for the compression level. Once the similar picture is found you can use ThumbsPlus to look at the file sizes and see which one is bigger.</p></htmltext>
<tokenext>ThumbsPlus is an image management tool .
It has a feature called " find similar " that should do what you want as far as identifying two pictures that are the same except for the compression level .
Once the similar picture is found you can use ThumbsPlus to look at the file sizes and see which one is bigger .</tokentext>
<sentencetext>ThumbsPlus is an image management tool.
It has a feature called "find similar" that should do what you want as far as identifying two pictures that are the same except for the compression level.
Once the similar picture is found you can use ThumbsPlus to look at the file sizes and see which one is bigger.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723715</id>
	<title>Re:I'm not an expert</title>
	<author>Bill, Shooter of Bul</author>
	<datestamp>1247743200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Good idea. I'm also not an expert. Though, I would think there is a limit to how well this would work. If it were cel-shaded to some extent, it might look better than a lossy jpg, but compress to a smaller size. The question is if there would be any point in between where loss of information would actually result in better image quality.</p><p>Imagine a chess board is in the image. If an image is sort of lossy, the lines between the black and white might get a little blurred, with some black running into some white and vice versa. If you just made the entire board a flat gray that averaged the two, it might look better to a human, even if that isn't what the original image was.</p></htmltext>
<tokenext>Good idea .
I 'm also not an expert .
Though , I would think there is a limit to how well this would work .
If it were cel-shaded to some extent , it might look better than a lossy jpg , but compress to a smaller size .
The question is if there would be any point in between where loss of information would actually result in better image quality . Imagine a chess board is in the image .
If an image is sort of lossy , the lines between the black and white might get a little blurred with some black running into some white and vice versa .
If you just made the entire board a flat gray that averaged the two , it might look better to a human , even if that is n't what the original image was .</tokentext>
<sentencetext>Good idea.
I'm also not an expert.
Though, I would think there is a limit to how well this would work.
If it were cel-shaded to some extent, it might look better than a lossy jpg, but compress to a smaller size.
The question is if there would be any point in between where loss of information would actually result in better image quality. Imagine a chess board is in the image.
If an image is sort of lossy, the lines between the black and white might get a little blurred, with some black running into some white and vice versa.
If you just made the entire board to be a flat gray that averaged the two, it might look better to a human, even if that isn't what the original image was.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723539</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725747</id>
	<title>Re:Measure sharpness?</title>
	<author>Hurricane78</author>
	<datestamp>1247760780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Uuum... I think he does not have the original. And I think that is the point. Because if he had the original, he could, you know,<nobr> <wbr></nobr>...use <em>that one</em>!<nobr> <wbr></nobr>;)</p></htmltext>
<tokenext>Uuum... I think he does not have the original .
And I think that is the point .
Because if he had the original , he could , you know , ...use that one !
; )</tokentext>
<sentencetext>Uuum... I think he does not have the original.
And I think that is the point.
Because if he had the original, he could, you know, ...use that one!
;)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723693</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723565</id>
	<title>Share your suggestions</title>
	<author>gehrehmee</author>
	<datestamp>1247742540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Given a set of pictures, it would be really nice to see them grouped by "these are several pictures of the same scene/object/subject". This is a tool I'm not aware of yet, and I'd love to hear what open-source tools people are using.</p><p>As a next step, it would be neat to pick out the one that's most in focus...</p></htmltext>
<tokenext>Given a set of pictures , it would be really nice to see them grouped by " these are several pictures of the same scene/object/subject " .
This is a tool I 'm not aware of yet , and I 'd love to hear what open-source tools people are using . As a next step , it would be neat to pick out the one that 's most in focus ...</tokentext>
<sentencetext>Given a set of pictures, it would be really nice to see them grouped by "these are several pictures of the same scene/object/subject".
This is a tool I'm not aware of yet, and I'd love to hear what open-source tools people are using. As a next step, it would be neat to pick out the one that's most in focus...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723509</id>
	<title>AI problem?</title>
	<author>Anonymous</author>
	<datestamp>1247742300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Unfortunately, I think you may find that it will simply require a human-level brain. I'd be really impressed with software that said, "Yep, this image just *looks* better to me."

Unless, of course, JPG artifacts are systematic and consistent across images, which could well be.</htmltext>
<tokenext>Unfortunately , I think you may find that it will simply require a human-level brain .
I 'd be really impressed with software that said , " Yep , this image just * looks * better to me .
" Unless , of course , JPG artifacts are systematic and consistent across images , which could well be .</tokentext>
<sentencetext>Unfortunately, I think you may find that it will simply require a human-level brain.
I'd be really impressed with software that said, "Yep, this image just *looks* better to me."
Unless, of course, JPG artifacts are systematic and consistent across images, which could well be.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724977</id>
	<title>Re:AI problem?</title>
	<author>arose</author>
	<datestamp>1247751480000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext>AI or <a href="http://oldhome.schmorp.de/marc/judge.html" title="schmorp.de">small utility</a> [schmorp.de]... You never know with computers<nobr> <wbr></nobr>;)</htmltext>
<tokenext>AI or small utility [ schmorp.de ] ... You never know with computers ; )</tokentext>
<sentencetext>AI or small utility [schmorp.de]... You never know with computers ;)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723509</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723575</id>
	<title>Try compressing both further</title>
	<author>Ed Avis</author>
	<datestamp>1247742540000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>I suppose you could recompress both images as JPEG with various quality settings, then do a pixel-by-pixel comparison computing a difference measure between each of the two source images and its recompressed version.  Presumably, the one with more JPEG artefacts to start with will be more similar to its compressed version, at a certain key level of compression.  This relies on your compression program generating the same kind of artefacts as the one used to make the images, but I suppose that cjpeg with the default settings has a good chance of working.</p><p>Failing that, just take the larger (in bytes) of the two JPEG files...</p></htmltext>
<tokenext>I suppose you could recompress both images as JPEG with various quality settings , then do a pixel-by-pixel comparison computing a difference measure between each of the two source images and its recompressed version .
Presumably , the one with more JPEG artefacts to start with will be more similar to its compressed version , at a certain key level of compression .
This relies on your compression program generating the same kind of artefacts as the one used to make the images , but I suppose that cjpeg with the default settings has a good chance of working . Failing that , just take the larger ( in bytes ) of the two JPEG files ...</tokentext>
<sentencetext>I suppose you could recompress both images as JPEG with various quality settings, then do a pixel-by-pixel comparison computing a difference measure between each of the two source images and its recompressed version.
Presumably, the one with more JPEG artefacts to start with will be more similar to its compressed version, at a certain key level of compression.
This relies on your compression program generating the same kind of artefacts as the one used to make the images, but I suppose that cjpeg with the default settings has a good chance of working. Failing that, just take the larger (in bytes) of the two JPEG files...</sentencetext>
</comment>
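The recompression idea might look like this in Python (an illustrative sketch under assumptions of mine: Pillow stands in for cjpeg, grayscale comparison keeps it simple, and the "key level of compression" is left as a parameter):

```python
import io

import numpy as np
from PIL import Image

def recompression_distance(jpeg_bytes, quality=75):
    """Mean absolute pixel change when the image is re-saved as JPEG.
    An image that already carries heavy JPEG artifacts is close to a
    fixed point of the encoder and tends to change less than a clean one."""
    img = Image.open(io.BytesIO(jpeg_bytes)).convert("L")
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    re = Image.open(buf).convert("L")
    a = np.asarray(img, dtype=float)
    b = np.asarray(re, dtype=float)
    return float(np.abs(a - b).mean())

def pick_better(jpeg_a, jpeg_b, quality=75):
    # The candidate that changes MORE on recompression is presumed to
    # have had fewer artifacts to begin with.
    if recompression_distance(jpeg_a, quality) > recompression_distance(jpeg_b, quality):
        return jpeg_a
    return jpeg_b
```

As the comment says, this works best when the probe quality is near the quality of the worse original, so sweeping a few quality settings and comparing the curves is safer than a single probe.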
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724171</id>
	<title>Re:Measure sharpness?</title>
	<author>Anonymous</author>
	<datestamp>1247745360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Also, JPEG works on blocks.  While it's true that JPEG gets rid of high frequency details first (and thus results in blurring), this is only useful within each block.  You can have high contrast areas at the edge of each block, and this is actually often some of the most annoying artifacting in images compressed at very low quality.  So just because it has sharp edges doesn't mean it's high quality.</p></htmltext>
<tokenext>Also , JPEG works on blocks .
While it 's true that JPEG gets rid of high frequency details first ( and thus results in blurring ) , this is only useful within each block .
You can have high contrast areas at the edge of each block , and this is actually often some of the most annoying artifacting in images compressed at very low quality .
So just because it has sharp edges does n't mean it 's high quality .</tokentext>
<sentencetext>Also, JPEG works on blocks.
While it's true that JPEG gets rid of high frequency details first (and thus results in blurring), this is only useful within each block.
You can have high contrast areas at the edge of each block, and this is actually often some of the most annoying artifacting in images compressed at very low quality.
So just because it has sharp edges doesn't mean it's high quality.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723693</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724963</id>
	<title>How about "Date Modified"?</title>
	<author>tomsomething</author>
	<datestamp>1247751240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If by "near-duplicate" you mean different files that were actually once the same image, sorting by "date modified" might give you satisfactory results. Of course, I'm making certain assumtions here about how the images were acquired and why there are multiple versions, and only you will know if this applies to your situation, but I would suspect that the older files would be of better quality.</htmltext>
<tokenext>If by " near-duplicate " you mean different files that were actually once the same image , sorting by " date modified " might give you satisfactory results .
Of course , I 'm making certain assumptions here about how the images were acquired and why there are multiple versions , and only you will know if this applies to your situation , but I would suspect that the older files would be of better quality .</tokentext>
<sentencetext>If by "near-duplicate" you mean different files that were actually once the same image, sorting by "date modified" might give you satisfactory results.
Of course, I'm making certain assumptions here about how the images were acquired and why there are multiple versions, and only you will know if this applies to your situation, but I would suspect that the older files would be of better quality.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724033</id>
	<title>Re:File size</title>
	<author>Anonymous</author>
	<datestamp>1247744760000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Except the guy didn't ask about any of that--all he asked about was jpeg artifacts.</p></htmltext>
<tokenext>Except the guy did n't ask about any of that--all he asked about was jpeg artifacts .</tokentext>
<sentencetext>Except the guy didn't ask about any of that--all he asked about was jpeg artifacts.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723651</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28735277</id>
	<title>A few ideas</title>
	<author>petermgreen</author>
	<datestamp>1247826540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>1: count how many unique values there are for each DCT coefficient. If you only find a small number then it probably means the image has been through low quality jpeg compression. This method may be fooled if the image has been cropped in a way that changes the block boundaries, though.<br>2: check for excessive high frequency noise; this may indicate the image has been dithered in the past. OTOH, excessively little high frequency content may indicate heavy jpeg compression.</p><p>IMO storage is cheap, so what I would do is make a database which could index the various copies of each image. You could have things arranged so there was one version the software considered "probably best", but if you really needed the best quality copy you could go back and check manually.</p></htmltext>
<tokenext>1 : count how many unique values there are for each DCT coefficient .
If you only find a small number then it probably means the image has been through low quality jpeg compression .
This method may be fooled if the image has been cropped in a way that changes the block boundaries , though . 2 : check for excessive high frequency noise ; this may indicate the image has been dithered in the past .
OTOH , excessively little high frequency content may indicate heavy jpeg compression . IMO storage is cheap , so what I would do is make a database which could index the various copies of each image .
You could have things arranged so there was one version the software considered " probably best " , but if you really needed the best quality copy you could go back and check manually .</tokentext>
<sentencetext>1: count how many unique values there are for each DCT coefficient.
If you only find a small number then it probably means the image has been through low quality jpeg compression.
This method may be fooled if the image has been cropped in a way that changes the block boundaries, though. 2: check for excessive high frequency noise; this may indicate the image has been dithered in the past.
OTOH, excessively little high frequency content may indicate heavy jpeg compression. IMO storage is cheap, so what I would do is make a database which could index the various copies of each image.
You could have things arranged so there was one version the software considered "probably best", but if you really needed the best quality copy you could go back and check manually.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724273</id>
	<title>How about audio?</title>
	<author>bondiblueos9</author>
	<datestamp>1247746020000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext>I would very much like to do the same with audio.  I have so many duplicate tracks in my music collection in different formats and bitrates.</htmltext>
<tokenext>I would very much like to do the same with audio .
I have so many duplicate tracks in my music collection in different formats and bitrates .</tokentext>
<sentencetext>I would very much like to do the same with audio.
I have so many duplicate tracks in my music collection in different formats and bitrates.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725663</id>
	<title>Re:DCT</title>
	<author>Anonymous</author>
	<datestamp>1247759940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>Just look at the manner in which JPEGs are encoded for your answer!</p><p>Take the DCT (discrete cosine transform) of blocks of pixels throughout the image.  Examine the frequency content of the each of these blocks and determine the amount of spatial frequency suppression.  This will correlate with the quality factor used during compression!</p><p>
&nbsp; </p></div><p>good call. this the only remotely correct answer to the actual question on here so far.</p></div>
	</htmltext>
<tokenext>Just look at the manner in which JPEGs are encoded for your answer ! Take the DCT ( discrete cosine transform ) of blocks of pixels throughout the image .
Examine the frequency content of each of these blocks and determine the amount of spatial frequency suppression .
This will correlate with the quality factor used during compression !
Good call .
This is the only remotely correct answer to the actual question on here so far .</tokentext>
<sentencetext>Just look at the manner in which JPEGs are encoded for your answer! Take the DCT (discrete cosine transform) of blocks of pixels throughout the image.
Examine the frequency content of each of these blocks and determine the amount of spatial frequency suppression.
This will correlate with the quality factor used during compression!
Good call.
This is the only remotely correct answer to the actual question on here so far.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723695</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726609</id>
	<title>Structural Similarity Index Method (SSIM)</title>
	<author>Paridel</author>
	<datestamp>1247772660000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext>In general your best bet would be to use an image quality metric that takes into account how the human visual system works. The 2D frequency response of the human eye looks something like a diamond, which means that we see vertical and horizontal frequencies better than diagonal ones.
<br> <br>
In fact, most image compression techniques (including JPEG) take this into account; however, conventional ways of determining the noise in images (minimum mean squared error, peak signal-to-noise ratio, root mean square error) don't factor in the human visual system.
<br> <br>
Your best bet is to use something like the structural similarity method (SSIM) by Prof. Al Bovik of UT Austin and his student Prof. Zhou Wang (now at the University of Waterloo).
<br> <br>
You can read all about SSIM and get example code here:
<a href="http://www.ece.uwaterloo.ca/~z70wang/research/ssim/" title="uwaterloo.ca" rel="nofollow">http://www.ece.uwaterloo.ca/~z70wang/research/ssim/</a> [uwaterloo.ca]
<br> <br>
Or read more about image quality assessment at Prof. Bovik's website:
<a href="http://live.ece.utexas.edu/research/Quality/index.htm" title="utexas.edu" rel="nofollow">http://live.ece.utexas.edu/research/Quality/index.htm</a> [utexas.edu]
<br> <br>
If you don't care about how it works, and just want to use it, you can get example MATLAB code for SSIM at that website, and C implementations floating around the net. The method is easy to use; essentially the SSIM function takes two images and returns a number between 0 and 1 that describes how similar the images are. Given two compressed images and the original image, take the SSIM between each and the original. The compressed image with the higher SSIM value is the "best".
<br> <br>
It sounds like for your problem you might NOT have the original uncompressed image. In that case you might try checking for minimal entropy or maximum contrast in your images.
<br> <br>
Essentially entropy would be calculated as:
<br> <br>
h = histogram(Image);<br>
p = h./(number of pixels in image);<br>
entropy = -sum(p.*log2(p));
<br> <br>
You will need to make sure you scale the image appropriately and don't divide by zero! Or better yet, you should be able to find code for image entropy and contrast on the web. Just try searching for entropy.m for a MATLAB version.
<br> <br>
Good luck!</htmltext>
<tokenext>In general your best bet would be to use an image quality metric that takes into account how the human visual system works .
The 2D frequency response of the human eye looks something like a diamond , which means that we see vertical and horizontal frequencies better than diagonal ones .
In fact , most image compression techniques ( including JPEG ) take this into account , however , conventional ways of determining the noise in images ( minimum mean squared error , peak signal to noise , root mean squares ) do n't factor in the human visual system .
Your best bet is to use something like the structural similarity method ( SSIM ) by Prof. Al Bovik of UT Austin and his student Prof. Zhou Wang ( now at the University of Waterloo ) .
You can read all about SSIM and get example code here : http : //www.ece.uwaterloo.ca/ ~ z70wang/research/ssim/ [ uwaterloo.ca ] Or read more about image quality assessment at Prof. Bovik 's website : http : //live.ece.utexas.edu/research/Quality/index.htm [ utexas.edu ] If you do n't care about how it works , and just want to use it , you can get example code for ssim in matlab at that website and C floating around the net .
The method is easy to use ; essentially the ssim function takes two images and returns a number between 0 and 1 that describes how similar the images are .
Given two compressed images and the original image , take the SSIM between each and the original .
The compressed image with the higher SSIM value is the " best " .
It sounds like for your problem you might NOT have the original uncompressed image .
In that case you might try checking for minimal entropy or maximum contrast in your images .
Essentially entropy would be calculated as : h = histogram ( Image ) ; p = h./ ( number of pixels in image ) ; entropy = -sum ( p.*log2 ( p ) ) ; You will need to make sure you scale the image appropriately and do n't divide by zero !
Or better yet , you should be able to find code for image entropy and contrast on the web .
Just try searching for entropy.m for a matlab version .
Good luck !</tokentext>
<sentencetext>In general your best bet would be to use an image quality metric that takes into account how the human visual system works.
The 2D frequency response of the human eye looks something like a diamond, which means that we see vertical and horizontal frequencies better than diagonal ones.
In fact, most image compression techniques (including JPEG) take this into account, however, conventional ways of determining the noise in images (minimum mean squared error, peak signal to noise, root mean squares) don't factor in the human visual system.
Your best bet is to use something like the structural similarity method (SSIM) by Prof. Al Bovik of UT Austin and his student Prof. Zhou Wang (now at the University of Waterloo).
You can read all about SSIM and get example code here:
http://www.ece.uwaterloo.ca/~z70wang/research/ssim/ [uwaterloo.ca]
 
Or read more about image quality assessment at Prof. Bovik's website:
http://live.ece.utexas.edu/research/Quality/index.htm [utexas.edu]
 
If you don't care about how it works, and just want to use it, you can get example code for ssim in matlab at that website and C floating around the net.
The method is easy to use; essentially the ssim function takes two images and returns a number between 0 and 1 that describes how similar the images are.
Given two compressed images and the original image, take the SSIM between each and the original.
The compressed image with the higher SSIM value is the "best".
It sounds like for your problem you might NOT have the original uncompressed image.
In that case you might try checking for minimal entropy or maximum contrast in your images.
Essentially entropy would be calculated as:
 
h = histogram(Image);
p = h./(number of pixels in image);
entropy = -sum(p.*log2(p));
 
You will need to make sure you scale the image appropriately and don't divide by zero!
Or better yet, you should be able to find code for image entropy and contrast on the web.
Just try searching for entropy.m for a matlab version.
Good luck!</sentencetext>
</comment>
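[Editor's sketch: for anyone without MATLAB, the comment's histogram-entropy idea can be written in Python with NumPy assumed. Note the standard formula multiplies, `-sum(p * log2(p))`, and zero-count bins must be dropped before taking the log.]

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (in bits) of an 8-bit image's intensity histogram."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = h / h.sum()   # normalize counts to probabilities
    p = p[p > 0]      # drop empty bins so log2(0) never occurs
    return float(-np.sum(p * np.log2(p)))
```

[A flat image scores 0 bits; a 50/50 two-level image scores exactly 1 bit, so the function is easy to sanity-check before pointing it at real photos.]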
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28728661</id>
	<title>Re:File size</title>
	<author>david.gilbert</author>
	<datestamp>1247840520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You're gonna have to change your thinking if you ever want to be a consultant.</htmltext>
<tokenext>You 're gon na have to change your thinking if you ever want to be a consultant .</tokentext>
<sentencetext>You're gonna have to change your thinking if you ever want to be a consultant.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725545</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724579</id>
	<title>difference</title>
	<author>collywally</author>
	<datestamp>1247748120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>This is how I check for how much compression I have in my images. <br>
1. Load the original and the JPEG into Photoshop (or whatever you use).  <br>
2. Set difference as your transfer mode.  This will show you how different they are.<br>
3. Find out the value of all the pixels (I don't know, add them together or something).<br>
Repeat the above steps with the second picture. <br>
Whichever result is greater (why does that sound like bad English to me?) marks the lower-quality image.<br>
Use Python and PIL (the Python Imaging Library) to automate the whole thing and that's it.</htmltext>
<tokenext>This is how I check for how much compression i have in my images .
1. Grab the original and the jpeg into photoshop ( or whatever you use ) 2. do a difference as your transfer mode .
This will show you how different it is .
3. find out the value of all the pixels ( I do n't know ad them together or something ) Repeat the above steps with the second picture .
whichever is more is the one that is more different ( why does that sound like bad English to me ?
) will be the lower quality image .
Use python and the PIL ( python image library ) to automate the whole thing and thats it .</tokentext>
<sentencetext>This is how I check for how much compression i have in my images.
1. Grab the original and the jpeg into photoshop (or whatever you use)  
2. do a difference as your transfer mode.
This will show you how different it is.
3. find out the value of all the pixels (I don't know ad them together or something)
Repeat the above steps with the second picture.
whichever is more is the one that is more different (why does that sound like bad English to me?
) will be the lower quality image.
Use python and the PIL (python image library) to automate the whole thing and thats it.</sentencetext>
</comment>
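[Editor's sketch: the difference-mode recipe above automates readily. NumPy is assumed, and mean absolute difference is my guess at step 3's "add them together or something".]

```python
import numpy as np

def mean_abs_diff(reference, candidate):
    """Steps 2-3 of the recipe: per-pixel difference, reduced to one score."""
    a = np.asarray(reference, dtype=np.int64)  # widen so subtraction can't wrap
    b = np.asarray(candidate, dtype=np.int64)
    return float(np.mean(np.abs(a - b)))

def pick_lower_quality(reference, cand1, cand2):
    """The candidate farther from the reference is the lower-quality one."""
    return 1 if mean_abs_diff(reference, cand1) >= mean_abs_diff(reference, cand2) else 2
```

[This only makes sense when a shared reference exists and the candidates are pixel-aligned with it, as the recipe itself assumes.]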
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723779</id>
	<title>edge detection</title>
	<author>Anonymous</author>
	<datestamp>1247743440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Use an edge-detection filter. Since JPEG artifacts usually present themselves as "smeared out" edges, you may be able to derive a rule based on the edge-detected image.</p></htmltext>
<tokenext>use an edge-detection filter .
since jpeg artifacts usually present themselves as " smeared out " edges , you may be able to figure out some rule based on the edge-detected image .</tokentext>
<sentencetext>use an edge-detection filter.
since jpeg artifacts usually present themselves as "smeared out" edges, you may be able to figure out some rule based on the edge-detected image.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725445</id>
	<title>Maybe I'm thinking too simple here, but:</title>
	<author>Hurricane78</author>
	<datestamp>1247755980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>What about just, you know... looking at them?<br>And if you can't tell the difference, does it matter then? (Just take the smaller one.)<br>That is my approach.</p><p>If you want the best one, even when you can't see the difference, just take the bigger one.<br>If the codec is the same, the chance that a higher-quality image is smaller is zero.</p><p>There, I solved it for you. :D<br>Or as a funny advertisement for a newspaper said:</p><p><div class="quote"><p> <a href="http://www.youtube.com/watch?v=imYGd7PPouw" title="youtube.com">[Image of a shiny pen.]</a> [youtube.com]<br><a href="http://www.youtube.com/watch?v=imYGd7PPouw" title="youtube.com">Before the first manned flight to space, NASA developed a pen that can write in zero gravity without the ink leaking.</a> [youtube.com]<br><a href="http://www.youtube.com/watch?v=imYGd7PPouw" title="youtube.com">The development costs amounted to $12 million.</a> [youtube.com]<br><a href="http://www.youtube.com/watch?v=imYGd7PPouw" title="youtube.com">[Removes pen, and puts a pencil in its place.]</a> [youtube.com]<br><a href="http://www.youtube.com/watch?v=imYGd7PPouw" title="youtube.com">That's... how the Russians solved the problem.</a> [youtube.com]</p></div></p>
	</htmltext>
<tokenext>What about just , you know... looking at them ? And if you ca n't tell the difference , does it matter then ?
( Just take the smaller one .
) That is my approach.If you want the best one , even when you ca n't see the difference , just take the biggest one.If the codec is the same , the chance that a higher quality image is smaller , is zero.There , I solved it for you .
: DOr as a funny advertisement for a newspaper said : [ Image of a shiny pen .
] [ youtube.com ] Before the first manned flight to space , NASA developed a pen , that can write in zero gravity , without the ink leaking .
[ youtube.com ] The development costs amounted to $ 12 million .
[ youtube.com ] [ Removes pen , and puts a pencil in its place .
] [ youtube.com ] That 's... how the Russians solved the problem .
[ youtube.com ]</tokentext>
<sentencetext>What about just, you know... looking at them?And if you can't tell the difference, does it matter then?
(Just take the smaller one.
)That is my approach.If you want the best one, even when you can't see the difference, just take the biggest one.If the codec is the same, the chance that a higher quality image is smaller, is zero.There, I solved it for you.
:DOr as a funny advertisement for a newspaper said: [Image of a shiny pen.
] [youtube.com]Before the first manned flight to space, NASA developed a pen, that can write in zero gravity, without the ink leaking.
[youtube.com]The development costs amounted to $12 million.
[youtube.com][Removes pen, and puts a pencil in its place.
] [youtube.com]That's... how the Russians solved the problem.
[youtube.com] 
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723921</id>
	<title>Blur Detection?</title>
	<author>HashDefine</author>
	<datestamp>1247744220000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>I wonder if out-of-focus or blur detection methods will give you a metric which varies with the level of JPEG artifacts; after all, JPEG artifacts should make it harder to do things like edge detection, which is exactly what blurry and out-of-focus images also make harder.</p><p>A Google search for blur detection should bring up things that you can try; <a href="http://www.kerrywong.com/2009/06/19/image-blur-detection-via-hough-transform-i/" title="kerrywong.com" rel="nofollow">here</a> [kerrywong.com] is a series of posts that do a good job of explaining some of the work involved.</p></htmltext>
<tokenext>I wonder if out of focus or blue detection methods will give you a metric which varies with the level of jpeg artifcats , after all the jpeg artifacts should make it more difficult to do things like edge detections etc which are the same the things that made more difficult by blurry and out of focus imagesA google search for blur detection should bring up things that you can try , Here [ kerrywong.com ] is series of posts that to do a good job of explaining some of the work involved</tokentext>
<sentencetext> I wonder if out of focus or blue detection methods will give you a metric which varies with the level of jpeg artifcats, after all the jpeg artifacts should make it more difficult to do things like edge detections etc which are the same the things that made more difficult by blurry and out of focus imagesA google search for blur detection should bring up things that you can try, Here [kerrywong.com] is series of posts that to do a good job of explaining some of the work involved</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725469</id>
	<title>Re:File size</title>
	<author>Hurricane78</author>
	<datestamp>1247756280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Well, that's why you <em>look</em> at it first. If you can't tell the difference, and want the "best quality" anyway, you've got the same disease as an "audiophile", and I recommend some Monster display cables to go with it. :P</p></htmltext>
<tokenext>Well , that 's why you look at it first .
If you ca n't tell the difference , and want the " best quality " anyway , you got the same disease an as " audiophile " , and I recommend some Monster display cables to go with it .
: P</tokentext>
<sentencetext>Well, that's why you look at it first.
If you can't tell the difference, and want the "best quality" anyway, you got the same disease an as "audiophile", and I recommend some Monster display cables to go with it.
:P</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723651</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723931</id>
	<title>Fourier transform</title>
	<author>maxwell demon</author>
	<datestamp>1247744220000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Assuming the only quality loss is due to JPEG compression, I guess a Fourier transform should give you a hint: I think the worse-quality image should have lower amplitude at high frequencies.</p><p>Of course, that criterion may be misleading if the image was otherwise modified. For example, noise filters will typically reduce high frequencies as well, but you'd generally consider the result superior (otherwise you wouldn't have applied the filter).</p></htmltext>
<tokenext>Assuming the only quality loss is due to JPEG compression , I guess a fourier transform should give you a hint : I think the worse quality image should have lower amplitude of high frequencies.Of course , that criterion may be misleading if the image was otherwise modified .
For example noise filters will typically reduce high frequencies as well , but you 'd generally consider the result superior ( otherwise you wold n't have applied the filter ) .</tokentext>
<sentencetext>Assuming the only quality loss is due to JPEG compression, I guess a fourier transform should give you a hint: I think the worse quality image should have lower amplitude of high frequencies.Of course, that criterion may be misleading if the image was otherwise modified.
For example noise filters will typically reduce high frequencies as well, but you'd generally consider the result superior (otherwise you woldn't have applied the filter).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723543</id>
	<title>File Size.</title>
	<author>Anonymous</author>
	<datestamp>1247742420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Larger file size should give a rough hint if both images are the same format (i.e. JPEG). But you've probably already thought of that.</p><p>Of course, there ought to be better ways...</p></htmltext>
<tokenext>Larger file size should give a rough hint if both images are the same format ( i.e .
JPEG ) . But you 've probably already thought of that.Of course , there ought to be better ways.. .</tokentext>
<sentencetext>Larger file size should give a rough hint if both images are the same format (i.e.
JPEG). But you've probably already thought of that.Of course, there ought to be better ways...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725553</id>
	<title>Re:DCT</title>
	<author>eggnoglatte</author>
	<datestamp>1247757120000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>That works, but only if you have exact, pixel-to-pixel correspondence between the photos. It won't work if you just grab 2 photos from Flickr that both show the Eiffel Tower, and you wonder which one is "better".</p><p>Luckily, there is a simple way to do it: use jpegtran to extract the quantization table from each image. Pick the one with the smaller values. This can easily be scripted.</p><p>Caveat: this will not work if the images have been decoded and re-coded multiple times.</p></htmltext>
<tokenext>That works , but only if you have exact , pixel-to-pixel correspondence between the photos .
It wo n't work if you just grab 2 photos from flicker that both show the Eiffel tower , and you wonder which one is " better " .Luckly , there is a simple way to do it : use jpegtran to extract the quantization table form each image .
Pick the one with the smaller values .
This can easily be scripted.Caveat : this will not work if the images have been decoded and re-coded multiple times .</tokentext>
<sentencetext>That works, but only if you have exact, pixel-to-pixel correspondence between the photos.
It won't work if you just grab 2 photos from flicker that both show the Eiffel tower, and you wonder which one is "better".Luckly, there is a simple way to do it: use jpegtran to extract the quantization table  form each image.
Pick the one with the smaller values.
This can easily be scripted.Caveat: this will not work if the images have been decoded and re-coded multiple times.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723695</parent>
</comment>
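[Editor's sketch: the quantization tables can also be read without external tools, since a JPEG stores them in DQT segments (marker 0xFF 0xDB, per the JPEG/T.81 spec). This rough parser handles only baseline 8-bit tables and ignores edge cases; it is a sketch, not a robust JPEG reader.]

```python
def quant_tables(jpeg_bytes):
    """Parse {table_id: [64 values]} from DQT (0xFFDB) segments.

    Only 8-bit-precision tables (baseline JPEG) are handled; larger
    entries mean coarser quantization, i.e. lower quality.
    """
    tables = {}
    data = jpeg_bytes
    i = 2  # skip the SOI marker (0xFF 0xD8)
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            i += 1
            continue
        marker = data[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded data follows, stop scanning
            break
        length = (data[i + 2] << 8) | data[i + 3]
        if marker == 0xDB:  # DQT segment: one or more tables back to back
            j = i + 4
            end = i + 2 + length
            while j < end:
                precision, table_id = data[j] >> 4, data[j] & 0x0F
                if precision != 0:  # 16-bit table: out of scope for this sketch
                    break
                tables[table_id] = list(data[j + 1:j + 65])
                j += 65
        i += 2 + length
    return tables
```

[Comparing, say, `sum(quant_tables(a)[0])` across two files gives the "smaller values" test from the comment above without shelling out to an external tool.]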
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724537</id>
	<title>Tough process, have a look at the frequency domain</title>
	<author>Anonymous</author>
	<datestamp>1247747880000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>If you've identified two images as the same (can be done by comparing pics of the same spatial resolution (make sure to low-pass filter before resizing to avoid artifacting!) and looking at the mean sum of squared differences for really small differences... you'll have to play around with tolerance to find out if it's the "same", and I'd always keep a wary eye... it'll just find similar images IMO), then you just have to take a look at their frequency-domain counterpart images.  The images with the most detail will have more energy in the high frequencies than the other, less detailed images.<br>On the other hand, strictly for seeing which has the most artifacting, if you've identified images as the "same", the completely horizontal and vertical high frequencies should have lots of energy (by comparison with the good image, and within the bad image itself) to make all those blocks.<br>Matlab makes it easy to visualize and transform this kind of stuff, so take a look at its image processing toolbox or documentation (docs freely available online).</p></htmltext>
<tokenext>If you 've identified two images as the same ( can be done by comparing pics of the same spatial resolution ( make sure to low pass filter before resize to avoid artifacting !
) and looking at the mean sum of square differences for really small differences... you 'll have to play around with tolerance to find if its the " same " but and I 'd always keep a weary eye... it 'll just find similar images IMO ) , then you just have to take a look at their frequency domain counterpart images .
The images with the most detail will have more energy in the high frequencies than the other less detailed images.On the other hand , strictly for seeing who has the most artifacting , if you 've identified images as the " same " , the completely horizontal and vertical high frequencies should have lots of energy ( by comparison wrt the good image and within the bad image itself ) to make all those blocks.Matlab makes it easy to visualize and transform this kind of stuff so take a look at its image processing toolbox or documentation ( docs freely available online ) .</tokentext>
<sentencetext>If you've identified two images as the same (can be done by comparing pics of the same spatial resolution (make sure to low pass filter before resize to avoid artifacting!
)  and looking at the mean sum of square differences for really small differences... you'll have to play around with tolerance to find if its the "same" but and I'd always keep a weary eye... it'll just find similar images IMO), then you just have to take a look at their frequency domain counterpart images.
The images with the most detail will have more energy in the high frequencies than the other  less detailed images.On the other hand, strictly for seeing who has the most artifacting, if you've identified images as the "same", the completely horizontal and vertical high frequencies should have lots of energy (by comparison wrt the good image and within the bad image itself) to make all those blocks.Matlab makes it easy to visualize and transform this kind of stuff so take a look at its image processing toolbox or documentation (docs freely available online).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28731671</id>
	<title>Re:use a "difference matte"</title>
	<author>Anonymous</author>
	<datestamp>1247853360000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Ok, Mr. +4 Informative After Effects guy,</p><p>how do you know which image has the higher quality, and which one the lower?<br>And how do you repeat the process hundreds of times?</p><p>Must be a real joy spending all day in your favorite Image Compositing Program. No wonder you haven't been sober in months.</p><p>And for you whack moderators out there: stop modding up comments just because they contain a few fancy-sounding words you've heard the cool kids use before!</p><p>adam</p><p>BOXXlabs</p></htmltext>
<tokenext>Ok , Mr. + 4 Informative After Effects guy,how do you know which image has the higher quality , which one the lower ? And how do you repeat the process hundreds of times ? Must be a real joy spending all day in your favorite Image Compositing Program .
No wonder you have n't been sober in months.And for you whack moderators out there : Stop modding up comments because it contains a few fancy-sounding words you 've heard the cool kids use before ! adamBOXXlabs</tokentext>
<sentencetext>Ok, Mr. +4 Informative After Effects guy,how do you know which image has the higher quality, which one the lower?And how do you repeat the process hundreds of times?Must be a real joy spending all day in your favorite Image Compositing Program.
No wonder you haven't been sober in months.And for you whack moderators out there: Stop modding up comments because it contains a few fancy-sounding words you've heard the cool kids use before!adamBOXXlabs</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723719</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723885</id>
	<title>analyze the picture</title>
	<author>Anonymous</author>
	<datestamp>1247744040000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Do a two-dimensional fast Fourier transform; the one with the highest frequency components retained is the better. As the JPEG standard is based on a Fourier-type transform (the DCT), understanding the standard will probably result in a faster solution.</p></htmltext>
<tokenext>Do a two dimensional fast fourier transform , the one with the highest frequency components retained is the better.As the jpeg standard is based on the fourier transform understanding the standard will probable result in a faster solution .</tokentext>
<sentencetext>Do a two dimensional fast fourier transform, the one with the highest frequency components retained is the better.As the jpeg standard is based on the fourier transform understanding the standard will probable result  in a faster solution.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724251</id>
	<title>compare against the static baseline.</title>
	<author>circusboy</author>
	<datestamp>1247745840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Compare both images against the original, not each other.<br>Count the number of pixels that differ from the original, then calculate the max and average difference between each image and the original.</p><p>Decide which parameter means more to you.</p><p>Go forward from there.</p></htmltext>
<tokenext>compare both images against the original , not each other.count number of pixels different from the original , then calculate max and average difference between either image and the original.decide which parameter means more to you.go forward from there .</tokentext>
<sentencetext>compare both images against the original, not each other.count number of pixels different from the original, then calculate max and average difference between either image and the original.decide which parameter means more to you.go forward from there.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726309</id>
	<title>This is the subject of many studies</title>
	<author>Anonymous</author>
	<datestamp>1247767920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>This is a very interesting question!!  (excuse me in advance for my English)</p><p>As mentioned in the previous posts, very simple mathematical equations can give you measures of the quality of an image. For instance, the 3 most popular are:</p><p>- Root mean square<br>- Mean absolute difference<br>- Peak signal-to-noise ratio</p><p>However, none of these can provide an accurate representation of the artifacts perceived by a human. I'm a student in Image Processing at the University of Sherbrooke. From what I know, there's a lot of research on "quality measurement" (especially one with people from the University of Texas and the Université de Lyon) from which we expect promising results.</p><p>Until then, you can still use some old tricks. Chop off where it's the least perceivable.<br>-  Translate the RGB channels into YUV. Chop the chrominance and keep the luminance. We tend to be more sensitive to the latter.<br>-  Chop the high frequencies using a logarithmic filter. We're more sensitive to small variations at lower frequencies.</p><p>All of the terms and concepts can be found with a quick search on Google / Wikipedia.</p><p>Also, take a look at the JPEG 2000 format. Its use of the wavelet transform leaves a lot fewer artifacts for a given compression ratio.<br>PGF (progressive graphics file) is similar to JPEG 2000: a bit faster on compression, leaves a few more artifacts.</p><p>However, some old tips are still</p></htmltext>
<tokenext>This is a very interesting question ! !
( excuse me in advance for my english ) As mentionned in the previous posts , very simple mathematical equations can give you mesures about the quality of an image .
For instance , the 3 most popular are : - Root mean square- Mean absolute difference- Peak signal to noise ratioHowever , none of these can provide an accurate representation of the artefacts percieved by a human .
I 'm a student in Image Processing in the University of Sherbrooke .
From what I know , there 's a lot of researches on " Quality mesurments " ( especially one with people from texas University and Universit     de lyon ) from which we expect promising results.Until then , you can still use some old tricks .
Chop off where it 's the less percievable.- Translate RGB channels in YUV .
Chop on the chrominance and keep the luminance .
We tend to be more sensible about the latter.- Chop on High frequencies using a logarithmic filter .
We 're more sensible to small variations on lower frequencies.All of the terms and concepts can be found with a quick search on google / wikipedia.Also , take a look at the Jpeg2000 format .
It 's usage of the wavelet transform leaves a lot less artefacts for a given compression ratio.PGF ( progressive grapfic file ) is similar to Jpeg2000 , a bit faster on compression , leaves few more artefacts.However , some old tips are still</tokentext>
<sentencetext>This is a very interesting question!
(Excuse my English in advance.) As mentioned in the previous posts, very simple mathematical equations can give you measures of the quality of an image.
For instance, the three most popular are: root mean square error, mean absolute difference, and peak signal-to-noise ratio. However, none of these can provide an accurate representation of the artefacts perceived by a human.
I'm a student in image processing at the Université de Sherbrooke.
From what I know, there's a lot of research on "quality measurement" (especially work involving people from the University of Texas and the Université de Lyon) from which we expect promising results. Until then, you can still use some old tricks.
Chop off where it's least perceivable:
- Translate the RGB channels to YUV. Chop the chrominance and keep the luminance. We tend to be more sensitive to the latter.
- Chop the high frequencies using a logarithmic filter. We're more sensitive to small variations at lower frequencies.
All of these terms and concepts can be found with a quick search on Google/Wikipedia. Also, take a look at the JPEG 2000 format. Its usage of the wavelet transform leaves far fewer artefacts for a given compression ratio. PGF (Progressive Graphics File) is similar to JPEG 2000, a bit faster on compression, but leaves a few more artefacts. However, some old tips are still</sentencetext>
</comment>
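[Editor's note] The three reference-based measures the comment names take only a few lines; this sketch (an editor's illustration, not the commenter's code) assumes both images are already decoded to same-shape numpy uint8 arrays:

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two same-shape images."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return float(np.sqrt(np.mean(d ** 2)))

def mad(a, b):
    """Mean absolute difference."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return float(np.mean(np.abs(d)))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = rmse(a, b)
    if err == 0:
        return float("inf")
    return 20.0 * np.log10(peak / err)
```

As the comment says, all three need a reference image, so for the submitter's problem they can only rank two copies against a known original, not directly against each other.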
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28732177</id>
	<title>Visipics</title>
	<author>Anonymous</author>
	<datestamp>1247855400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I've used VisiPics (google it or just add .info). It works very well for me. It'll scan the directories you choose, check for duplicate photos, display them (allowing you to compare them), and give you the option to move or delete either or all.</p></htmltext>
<tokenext>I 've used VisiPics ( google it or just add .info ) .
It works very well for me .
It 'll scan the directories you choose , check for duplicate photos , display them ( allowing you to compare them ) , and give you the option to move or delete either or all .</tokentext>
<sentencetext>I've used VisiPics (google it or just add .info).
It works very well for me.
It'll scan the directories you choose, check for duplicate photos, display them (allowing you to compare them), and give you the option to move or delete either or all.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28734829</id>
	<title>Re:File size</title>
	<author>Anonymous</author>
	<datestamp>1247823960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>This won't work; it's quite easy to set higher-quality JPEG settings in later save iterations, which may mean that the larger file is the less pristine of a set.</p></htmltext>
<tokenext>This wo n't work ; it 's quite easy to set higher-quality JPEG settings in later save iterations , which may mean that the larger file is the less pristine of a set .</tokentext>
<sentencetext>This won't work; it's quite easy to set higher-quality JPEG settings in later save iterations, which may mean that the larger file is the less pristine of a set.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725545</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724453</id>
	<title>Image sharpness measuring?</title>
	<author>Anonymous</author>
	<datestamp>1247747280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Replying to your post to create a new sub-thread, hope you don't mind as I think it involves similar research...</p><p>Often when I look at digital photos taken at a camera's maximum megapixel range, or even scans of negatives, or random pictures on the interwebs, I find them to be rather blurry; not necessarily out-of-focus, but simply 'soft'.</p><p>Essentially... there's more information being used to store the image 'as is' than there is casually useful* information -in- the image.</p><p>Does anybody know of software, or algorithms, to figure out how much casually useful information is in a picture, and at what size (dimensions) that picture would optimally be stored?</p><p>* by 'casually useful' I mean this... take today's APOD image:<br><a href="http://antwrp.gsfc.nasa.gov/apod/ap090716.html" title="nasa.gov" rel="nofollow">http://antwrp.gsfc.nasa.gov/apod/ap090716.html</a> [nasa.gov] ( view full - sparing their bandwidth by not linking to it, though I'm sure they have plenty )<br>That image to me, the casual user, looks blurry.  Every single pixel within it (and beyond from the original) is probably very important to the scientists; being able to run some algorithms on it to get every last bit of information from it.  But when I look at it, I see the smallest 'feature' in it as being maybe 3-4 pixels across, let's say 4.  So if I downsize it to 25% of the full size image, it looks perfectly sharp to me without any significant (to me, the casual user) loss of information. /anon</p></htmltext>
<tokenext>Replying to your post to create a new sub-thread , hope you do n't mind as I think it involves similar research...Often when I look at digital photos taken at a camera 's maximum megapixel range , or even scans of negatives , or random pictures on the interwebs , I find them to be rather blurry ; not necessarily out-of-focus , but simply 'soft'.Essentially.. there 's more information being used to store the image 'as is ' than there is casually useful * information -in- the image.Does anybody know of software , or algorithms , to figure out how much casually useful information is in a picture , and at what size ( dimensions ) that picture would optimally be stored ?
* by 'casually useful ' I mean this... take today 's APOD image : http : //antwrp.gsfc.nasa.gov/apod/ap090716.html [ nasa.gov ] ( view full - sparing their bandwidth by not linking to it , though I 'm sure they have plenty ) That image to me , the casual user , looks blurry .
Every single pixel within it ( and beyond from the original ) is probably very important to the scientists ; being able to run some algorithms on it to get every last bit of information from it .
But when I look at it , I see the smallest 'feature ' in it as being maybe 3-4 pixels across , let 's say 4 .
So if I downsize it to 25 % of the full size image , it looks perfectly sharp to me without any significant ( to me , the casual user ) loss of information .
/anon</tokentext>
<sentencetext>Replying to your post to create a new sub-thread, hope you don't mind as I think it involves similar research...Often when I look at digital photos taken at a camera's maximum megapixel range, or even scans of negatives, or random pictures on the interwebs, I find them to be rather blurry; not necessarily out-of-focus, but simply 'soft'.Essentially.. there's more information being used to store the image 'as is' than there is casually useful* information -in- the image.Does anybody know of software, or algorithms, to figure out how much casually useful information is in a picture, and at what size (dimensions) that picture would optimally be stored?
* by 'casually useful' I mean this... take today's APOD image:http://antwrp.gsfc.nasa.gov/apod/ap090716.html [nasa.gov] ( view full - sparing their bandwidth by not linking to it, though I'm sure they have plenty )That image to me, the casual user, looks blurry.
Every single pixel within it (and beyond from the original) is probably very important to the scientists; being able to run some algorithms on it to get every last bit of information from it.
But when I look at it, I see the smallest 'feature' in it as being maybe 3-4 pixels across, let's say 4.
So if I downsize it to 25% of the full size image, it looks perfectly sharp to me without any significant (to me, the casual user) loss of information.
/anon</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723813</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723571</id>
	<title>Choose the largest file size.</title>
	<author>Anonymous</author>
	<datestamp>1247742540000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Not perfect but it should be right in most cases.</p></htmltext>
<tokenext>Not perfect but it should be right in most cases .</tokentext>
<sentencetext>Not perfect but it should be right in most cases.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28728001</id>
	<title>Interpolative Comparison</title>
	<author>Anonymous</author>
	<datestamp>1247836380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>You could write a little app to interpolate across spaces in the JPEG and then compare the resulting differences from interpolated and actual data for each JPEG image.  Presumably, the image with more JPEG compression artifacts will have a higher (on average) difference between interpolated values and actual values because of the random artifacts which will throw off interpolation.</p><p>How finely grained your interpolation needs to be may be something you will have to experiment with... but I think this should work fairly reliably in theory.</p></htmltext>
<tokenext>You could write a little app to interpolate across spaces in the JPEG and then compare the resulting differences from interpolated and actual data for each JPEG image .
Presumably , the image with more JPEG compression artifacts will have a higher ( on average ) difference between interpolated values and actual values because of the random artifacts which will throw off interpolation .
How finely grained your interpolation needs to be may be something you will have to experiment with... but I think this should work fairly reliably in theory .</tokentext>
<sentencetext>You could write a little app to interpolate across spaces in the JPEG and then compare the resulting differences from interpolated and actual data for each JPEG image.
Presumably, the image with more JPEG compression artifacts will have a higher (on average) difference between interpolated values and actual values because of the random artifacts which will throw off interpolation. How finely grained your interpolation needs to be may be something you will have to experiment with... but I think this should work fairly reliably in theory.</sentencetext>
</comment>
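[Editor's note] The interpolation idea above can be prototyped in a few lines. This sketch (an editor's illustration, not the commenter's app) predicts each interior pixel from its four neighbours and averages the deviation, on the comment's hypothesis that compression artifacts throw the prediction off:

```python
import numpy as np

def interpolation_score(img):
    """Predict each interior pixel as the mean of its 4 neighbours and
    return the mean absolute deviation of the actual pixels from that
    prediction. img is a 2-D grayscale array (uint8 or float)."""
    x = img.astype(np.float64)
    predicted = (x[:-2, 1:-1] + x[2:, 1:-1] + x[1:-1, :-2] + x[1:-1, 2:]) / 4.0
    actual = x[1:-1, 1:-1]
    return float(np.mean(np.abs(actual - predicted)))
```

How well the score separates artifact-ridden decodes from clean ones would need the experimentation the comment mentions; smoothing from heavy compression can also lower the score rather than raise it.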
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723595</id>
	<title>Re:AI problem?</title>
	<author>Anonymous</author>
	<datestamp>1247742600000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Use a neural network. You can train the network by presenting it with high-quality photos, and their deteriorated versions.</p></htmltext>
<tokenext>Use a neural network .
You can train the network by presenting it with high-quality photos , and their deteriorated versions .</tokentext>
<sentencetext>Use a neural network.
You can train the network by presenting it with high-quality photos, and their deteriorated versions.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723509</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725969</id>
	<title>Re</title>
	<author>Anonymous</author>
	<datestamp>1247763180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>There was a story a while back of a programmer who worked with the quantization field or something to tell if a photo had been photoshopped, how many layers, and by what program EVEN if the file had been re-encoded and compressed.  Google "Krawetz's software."  He used it to show Al Qaeda's videos were manipulated.</p></htmltext>
<tokenext>There was a story a while back of a programmer who worked with the quantization field or something to tell if a photo had been photoshopped , how many layers , and by what program EVEN if the file had been re-encoded and compressed .
Google " Krawetz 's software . "
He used it to show Al Qaeda 's videos were manipulated .</tokentext>
<sentencetext>There was a story a while back of a programmer who worked with the quantization field or something to tell if a photo had been photoshopped, how many layers, and by what program EVEN if the file had been re-encoded and compressed.
Google "Krawetz's software."
He used it to show Al Qaeda's videos were manipulated.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723511</id>
	<title>Easy</title>
	<author>Anonymous</author>
	<datestamp>1247742300000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>Paste both images in your image editor of choice, one layer on top of each other, apply a difference/subtraction filter.</p></htmltext>
<tokenext>Paste both images in your image editor of choice , one layer on top of each other , apply a difference/subtraction filter .</tokentext>
<sentencetext>Paste both images in your image editor of choice, one layer on top of each other, apply a difference/subtraction filter.</sentencetext>
</comment>
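[Editor's note] For a collection of thousands of images you would script this rather than open an editor. The same difference-filter trick can be approximated with numpy once both images are decoded (an editor's sketch, assuming same-shape uint8 arrays):

```python
import numpy as np

def difference_fraction(a, b):
    """Scripted version of the layer-difference trick: per-sample absolute
    difference of two same-shape uint8 images, reported as the fraction
    of samples that differ at all."""
    d = np.abs(a.astype(np.int16) - b.astype(np.int16))
    return float(np.count_nonzero(d)) / d.size
```

Note the filter only shows *where* two copies differ; deciding which side is the damaged one still needs a reference or a separate artifact measure.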
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723539</id>
	<title>I'm not an expert</title>
	<author>Anonymous</author>
	<datestamp>1247742420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>But what if you saved both images in an uncompressed format (bmp?), then compressed them both using a lossless format (gzip?), and compared the file sizes...

<p>Do it with a bunch of images, and I expect you'll discover that the low-quality-gzipped image will be smaller than the high-quality-gzipped image...

</p><p>Maybe?  *shrug*</p></htmltext>
<tokenext>But what if you saved both images in an uncompressed format ( bmp ? ) , then compressed them both using a lossless format ( gzip ? ) , and compared the file sizes ...
Do it with a bunch of images , and I expect you 'll discover that the low-quality-gzipped image will be smaller than the high-quality-gzipped image ...
Maybe ? * shrug *</tokentext>
<sentencetext>But what if you saved both images in an uncompressed format (bmp?), then compressed them both using a lossless format (gzip?), and compared the file sizes...

Do it with a bunch of images, and I expect you'll discover that the low-quality-gzipped image will be smaller than the high-quality-gzipped image...

Maybe?  *shrug*</sentencetext>
</comment>
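[Editor's note] The gzip experiment above is cheap to run. This sketch (an editor's illustration) compresses the raw decoded pixels so the comparison depends only on pixel content, not on each file's JPEG settings:

```python
import gzip

import numpy as np

def gzip_size(pixels):
    """Byte length of the gzip stream for a decoded image's raw pixels.
    Per the poster's idea, compare this across near-duplicate decodes:
    smooth, blocky content gzips smaller, noisy content gzips larger."""
    return len(gzip.compress(np.ascontiguousarray(pixels).tobytes()))
```

Whether the lower-quality save really gzips smaller depends on which artifacts dominate: blocking smooths the image (smaller), but ringing adds noise (larger), so the hypothesis needs the batch test the poster suggests.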
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726381</id>
	<title>Re:AI problem?</title>
	<author>TheSpoom</author>
	<datestamp>1247768760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>&lt;/question&gt;</htmltext>
<tokenext></tokentext>
<sentencetext></sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724977</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724925</id>
	<title>Re:Easy</title>
	<author>dainichi</author>
	<datestamp>1247750880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>To 28000 images??? Even a group of trained monkeys would start revolting.</htmltext>
<tokenext>To 28000 images ? ? ?
even a group of trained monkeys would start revolting .</tokentext>
<sentencetext>To 28000 images???
Even a group of trained monkeys would start revolting.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723511</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726131</id>
	<title>thanks for the serious consideration here</title>
	<author>kpoole55</author>
	<datestamp>1247765520000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Thanks to the many who took this as a serious question and didn't turn this into a "It's just pr0n so who cares."  Some is pr0n, some isn't; the most consistent thing is humor.</p><p>Many ideas needed the original image to find the better quality of the copy and some asked where I get these images from.  These are linked in that I get the images from the USENET, from forums and from artists' galleries.  This means that there's only a small set, from the artists' galleries, that I know are original.  Others may be original but it may not be the original that comes to me first.  On occasion, an artist may even publish the same image in different forms depending on the limitations of the different forums he frequents.</p><p>There were some ideas that were nicely different from the directions I was following, and they'll give me more to think about.</p><p>I'll also acknowledge those who said that how the image is represented is less important than what the image represents.  That's quite true, but if I have a machine that can find the best representation of something I enjoy, then why not use it?</p></htmltext>
<tokenext>Thanks to the many who took this as a serious question and did n't turn this into a " It 's just pr0n so who cares . "
Some is pr0n , some is n't ; the most consistent thing is humor .
Many ideas needed the original image to find the better quality of the copy and some asked where I get these images from .
These are linked in that I get the images from the USENET , from forums and from artists ' galleries .
This means that there 's only a small set , from the artists ' galleries , that I know are original .
Others may be original but it may not be the original that comes to me first .
On occasion , an artist may even publish the same image in different forms depending on the limitations of the different forums he frequents .
There were some ideas that were nicely different from the directions I was following , and they 'll give me more to think about .
I 'll also acknowledge those who said that how the image is represented is less important than what the image represents .
That 's quite true , but if I have a machine that can find the best representation of something I enjoy , then why not use it ?</tokentext>
<sentencetext>Thanks to the many who took this as a serious question and didn't turn this into a "It's just pr0n so who cares."
Some is pr0n, some isn't; the most consistent thing is humor.
Many ideas needed the original image to find the better quality of the copy and some asked where I get these images from.
These are linked in that I get the images from the USENET, from forums and from artists' galleries.
This means that there's only a small set, from the artists' galleries, that I know are original.
Others may be original but it may not be the original that comes to me first.
On occasion, an artist may even publish the same image in different forms depending on the limitations of the different forums he frequents.
There were some ideas that were nicely different from the directions I was following, and they'll give me more to think about.
I'll also acknowledge those who said that how the image is represented is less important than what the image represents.
That's quite true, but if I have a machine that can find the best representation of something I enjoy, then why not use it?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723789</id>
	<title>Re:File size</title>
	<author>Anonymous</author>
	<datestamp>1247743560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>That's a good first order approximation, but if you're collecting your images from the internet you'll find that sometimes someone will save a low-quality jpeg image at higher quality. Some ancient browsers used to save all images as bmps, then that image might get converted to a jpeg later, using a quality setting that doesn't match the original. The artifacts will still be there but the file size will not reflect that.</p></htmltext>
<tokenext>That 's a good first order approximation , but if you 're collecting your images from the internet you 'll find that sometimes someone will save a low-quality jpeg image at higher quality .
Some ancient browsers used to save all images as bmps , then that image might get converted to a jpeg later , using a quality setting that does n't match the original .
The artifacts will still be there but the file size will not reflect that .</tokentext>
<sentencetext>That's a good first order approximation, but if you're collecting your images from the internet you'll find that sometimes someone will save a low-quality jpeg image at higher quality.
Some ancient browsers used to save all images as bmps, then that image might get converted to a jpeg later, using a quality setting that doesn't match the original.
The artifacts will still be there but the file size will not reflect that.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723707</id>
	<title>Re:File size</title>
	<author>Anonymous</author>
	<datestamp>1247743140000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>File size doesn't tell you anything.  If I take a picture with a bunch of noise (e.g. poor lighting) in it then it will not compress as well.  If I take the same picture with perfect lighting it might be higher quality but a smaller file size.</p><p>Why this is modded up, I don't know.  Too many morons out there.</p></htmltext>
<tokenext>File size does n't tell you anything .
If I take a picture with a bunch of noise ( e.g. poor lighting ) in it then it will not compress as well .
If I take the same picture with perfect lighting it might be higher quality but a smaller file size .
Why this is modded up , I do n't know .
Too many morons out there .</tokentext>
<sentencetext>File size doesn't tell you anything.
If I take a picture with a bunch of noise (e.g. poor lighting) in it then it will not compress as well.
If I take the same picture with perfect lighting it might be higher quality but a smaller file size.
Why this is modded up, I don't know.
Too many morons out there.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724561</id>
	<title>It depends what you want..</title>
	<author>Paracelcus</author>
	<datestamp>1247748060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>find dupes on the internet <a href="http://tineye.com/" title="tineye.com">http://tineye.com/</a> [tineye.com]<br>find dupes on your HDD <a href="http://www.bigbangenterprises.de/en/doublekiller/" title="bigbangenterprises.de">http://www.bigbangenterprises.de/en/doublekiller/</a> [bigbangenterprises.de]</p></htmltext>
<tokenext>find dupes on the internet http : //tineye.com/ [ tineye.com ] find dupes on your HDD http : //www.bigbangenterprises.de/en/doublekiller/ [ bigbangenterprises.de ]</tokentext>
<sentencetext>find dupes on the internet http://tineye.com/ [tineye.com]find dupes on your HDD http://www.bigbangenterprises.de/en/doublekiller/ [bigbangenterprises.de]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724109</id>
	<title>Re:Measure sharpness?</title>
	<author>uhmmmm</author>
	<datestamp>1247745120000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>Even faster is to look at the DCT coefficients in the file itself.  It doesn't even require decoding - JPEG compression works by quantizing the coefficients more heavily for higher compression rates, and particularly the high-frequency coefficients.  If more high-frequency coefficients are zero, it's been quantized more heavily, and is lower quality.</p><p>Now, it's not foolproof.  If one copy went through some intermediate processing (color dithering or something) before the final JPEG version was saved, it may have lost quality in places not accounted for by this method.  Comparing the quality of two differently-sized images is not as straightforward either.</p></htmltext>
<tokenext>Even faster is to look at the DCT coefficients in the file itself .
It does n't even require decoding - JPEG compression works by quantizing the coefficients more heavily for higher compression rates , and particularly the high-frequency coefficients .
If more high-frequency coefficients are zero , it 's been quantized more heavily , and is lower quality .
Now , it 's not foolproof .
If one copy went through some intermediate processing ( color dithering or something ) before the final JPEG version was saved , it may have lost quality in places not accounted for by this method .
Comparing the quality of two differently-sized images is not as straightforward either .</tokentext>
<sentencetext>Even faster is to look at the DCT coefficients in the file itself.
It doesn't even require decoding - JPEG compression works by quantizing the coefficients more heavily for higher compression rates, and particularly the high-frequency coefficients.
If more high-frequency coefficients are zero, it's been quantized more heavily, and is lower quality.
Now, it's not foolproof.
If one copy went through some intermediate processing (color dithering or something) before the final JPEG version was saved, it may have lost quality in places not accounted for by this method.
Comparing the quality of two differently-sized images is not as straightforward either.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723693</parent>
</comment>
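[Editor's note] A rough way to try the zero-coefficient test without a JPEG parser is to re-take the 8x8 DCT on the decoded pixels. This is an editor's sketch assuming scipy and a grayscale numpy array; reading the quantized coefficients straight from the file, as the comment suggests, would be more faithful:

```python
import numpy as np
from scipy.fft import dctn

def zero_highfreq_fraction(gray, block=8, threshold=0.5):
    """Take the 8x8 DCT of each block of a decoded grayscale image and
    report the fraction of near-zero AC coefficients. Under the comment's
    reasoning, a higher fraction suggests heavier quantization, i.e. a
    lower-quality save. (Re-taking the DCT only approximates the
    coefficients actually stored in the file.)"""
    h, w = gray.shape
    h -= h % block
    w -= w % block
    x = gray[:h, :w].astype(np.float64)
    zeros = 0
    total = 0
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(x[i:i + block, j:j + block], norm="ortho")
            near_zero = threshold >= np.abs(c)  # boolean mask per coefficient
            near_zero[0, 0] = False             # ignore the DC term
            zeros += int(near_zero.sum())
            total += block * block - 1
    return zeros / total
```

The block grid of a decoded copy may also be offset from the original file's grid (cropping, rotation), which would blur this estimate further.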
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726097</id>
	<title>Compression is just one factor</title>
	<author>Chris Pimlott</author>
	<datestamp>1247765160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>While important, compression isn't the only issue.  You'll also have to consider issues such as resolution, cropping, noise, blurriness, color balance, white level... especially if you're dealing with non-digital sources.  I went through a phase of collecting scans of HR Giger works and came across all sorts of subjective issues.  One scan might be extremely high res but cuts off the edges.  Another might be blurry but have more accurate colors (compared to low-res images from the artist's official sites).  Many times I ended up keeping multiple images since I couldn't find a single one reproducing everything faithfully.</p></htmltext>
<tokenext>While important , compression is n't the only issue .
You 'll also have to consider issues such as resolution , cropping , noise , blurriness , color balance , white level... especially if you 're dealing with non-digital sources .
I went through a phase of collecting scans of HR Giger works and came across all sorts of subjective issues .
One scan might be extremely high res but cuts off the edges .
Another might be blurry but have more accurate colors ( compared to low-res images from the artist 's official sites ) .
Many times I ended up keeping multiple images since I could n't find a single one reproducing everything faithfully .</tokentext>
<sentencetext>While important, compression isn't the only issue.
You'll also have to consider issues such as resolution, cropping, noise, blurriness, color balance, white level... especially if you're dealing with non-digital sources.
I went through a phase of collecting scans of HR Giger works and came across all sorts of subjective issues.
One scan might be extremely high res but cuts off the edges.
Another might be blurry but have more accurate colors (compared to low-res images from the artist's official sites).
Many times I ended up keeping multiple images since I couldn't find a single one reproducing everything faithfully.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726863</id>
	<title>Re:File size</title>
	<author>beelsebob</author>
	<datestamp>1247863200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yes, but it will have far fewer jpeg artifacts as well.  The quality loss will all be gif artifacts.</p></htmltext>
<tokenext>Yes , but it will have far fewer jpeg artifacts as well .
The quality loss will all be gif artifacts .</tokentext>
<sentencetext>Yes, but it will have far fewer jpeg artifacts as well.
The quality loss will all be gif artifacts.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723651</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726175</id>
	<title>Re:Easy</title>
	<author>Anonymous</author>
	<datestamp>1247766180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>How about looking for the most high-frequency response, basically meaning that the images are sharper (although this will more likely tell you that it's closer to the original copy, rather than better quality, as someone could later apply noise reduction to fix it up.)</p></htmltext>
<tokenext>How about looking for the most high-frequency response , basically meaning that the images are sharper ( although this will more likely tell you that it 's closer to the original copy , rather than better quality , as someone could later apply noise reduction to fix it up . )</tokentext>
<sentencetext>How about looking for the most high-frequency response, basically meaning that the images are sharper (although this will more likely tell you that it's closer to the original copy, rather than better quality, as someone could later apply noise reduction to fix it up.)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723511</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723567</id>
	<title>The Human Eye</title>
	<author>Anonymous</author>
	<datestamp>1247742540000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Artifacts are something visible to us - they mean nothing to software. It doesn't know whether the pixels are intentionally colored that way (i.e., detail) or colored that way through some compression process at some point in time (i.e., artifacts) or something else (e.g., dithering, color depth, banding, etc). If two images are compressed at vastly different ratios, you'll be able to tell easily. Otherwise, they're probably both at a default 90% and if you can't tell the difference, what's the problem?</p></htmltext>
<tokenext>Artifacts are something visible to us - they mean nothing to software .
It does n't know whether the pixels are intentionally colored that way ( i.e. , detail ) or colored that way through some compression process at some point in time ( i.e. , artifacts ) or something else ( e.g. , dithering , color depth , banding , etc ) .
If two images are compressed at vastly different ratios , you 'll be able to tell easily .
Otherwise , they 're probably both at a default 90 % and if you ca n't tell the difference , what 's the problem ?</tokentext>
<sentencetext>Artifacts are something visible to us - they mean nothing to software.
It doesn't know whether the pixels are intentionally colored that way (i.e., detail) or colored that way through some compression process at some point in time (i.e., artifacts) or something else (e.g., dithering, color depth, banding, etc).
If two images are compressed at vastly different ratios, you'll be able to tell easily.
Otherwise, they're probably both at a default 90% and if you can't tell the difference, what's the problem?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726873</id>
	<title>Compare them mathematically</title>
	<author>Anonymous</author>
	<datestamp>1247863380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Matlab. Though you need to have the original picture to compare. One thing, though: mathematical difference does not always correlate with perceived image quality. By reducing the resolution of the chrominance channels (e.g. half resolution for color, full resolution for luminance), you can get a much smaller image, and you cannot easily see the difference. So image quality is always subjective.</p></htmltext>
<tokenext>Matlab .
Though you need to have the original picture to compare .
One thing is though that mathematical difference does not correlate with image quality .
By reducing the resolution of the chrominance channels ( e.g .
half resolution for color , full resolution for luminance ) , you can get a much smaller image , and you can not easily see the difference .
So image quality is always subjective .</tokentext>
<sentencetext>Matlab.
Though you need to have the original picture to compare.
One thing is though that mathematical difference does not correlate with image quality.
By reducing the resolution of the chrominance channels (e.g.
half resolution for color, full resolution for luminance), you can get a much smaller image, and you cannot easily see the difference.
So image quality is always subjective.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724025</id>
	<title>Re:AI problem?</title>
	<author>Anonymous</author>
	<datestamp>1247744760000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>You're right, it needs to be done by humans to be sure.</p><p>Amazon's Mechanical Turk should do the trick.</p><p><a href="https://www.mturk.com/mturk/welcome" title="mturk.com">https://www.mturk.com/mturk/welcome</a> [mturk.com]</p></htmltext>
<tokenext>You 're right , it needs to be done by humans to be sure.Amazon 's Mechanical Turk should do the trick.https : //www.mturk.com/mturk/welcome [ mturk.com ]</tokentext>
<sentencetext>You're right, it needs to be done by humans to be sure.Amazon's Mechanical Turk should do the trick.https://www.mturk.com/mturk/welcome [mturk.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723509</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724367</id>
	<title>Is there a way to find out the compression engine?</title>
	<author>ID000001</author>
	<datestamp>1247746680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Does the JPEG header list the compression method as well as the compression ratio? If not, is there any way to figure out what kind of compression engine was used based on how an image is constructed?
<br> <br>
If so, simply do some testing against some of the most popular compression engines, based on the artifacts, to determine which engine was used, then find out their compression ratio (perhaps simple file size might work?). Then simply pick the image with the best quality based on engine used and ratio?</htmltext>
<tokenext>Does JPEG header have the compression method listed as well as compression ratio ?
If not , is there any way to figure out what kind of compression engine is used based on how an image is constructed ?
If so , simply do some testing against some of the most popular compression engines based on the artifacts to determine what engine is used , then find out their compression ratio ( perhaps a simple file size might work ? ) .
Then simply pick the images with the best quality base on engine used and ratio ?</tokentext>
<sentencetext>Does JPEG header have the compression method listed as well as compression ratio?
If not, is there any way to figure out what kind of compression engine is used based on how an image is constructed?
If so, simply do some testing against some of the most popular compression engines based on the artifacts to determine what engine is used, then find out their compression ratio (perhaps a simple file size might work?).
Then simply pick the images with the best quality base on engine used and ratio?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725231</id>
	<title>Re:File size</title>
	<author>RJFerret</author>
	<datestamp>1247753700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Sub-sampling will also totally throw off file size (which I adjust all the time depending on image content).</p><p>But...how about this?  Re-compress both images to your lowest typical level.  The one that changes the most will be the highest quality one, with the most detail and dynamic range, without time-consuming visual inspection.</p><p>I just tried it and found this method at least superficially effective.</p><p>In the future, use better file-naming notes!  (My originals are Name00.jpg, first gen are Name01.jpg, radical changes go to Name10.jpg, and weirdness can even be accommodated with Name10silo.jpg or Name10-512.jpg for resizes.)  They also sort in sequential order in file requesters for easy work flow/processing.</p><p>-Randy</p></htmltext>
<tokenext>Sub-sampling will also totally throw off file size ( which I adjust all the time depending on image content ) .But...how about this ?
Re-compress both images to your lowest typical level .
The one that 's changed the greatest will be the highest quality , have the most detail and dynamic range , without time consuming visual inspection.I just tried it and found this method superficially effective at least.In the future , use better file naming notes !
( My originals are Name00.jpg , first gen are Name01.jpg , radical changes go to Name10.jpg and weirdness can even be accommodated with Name10silo.jpg or Name10-512.jpg for resizes .
) They also sort in sequential order in file requesters for easy work flow/processing.-Randy</tokentext>
<sentencetext>Sub-sampling will also totally throw off file size (which I adjust all the time depending on image content).But...how about this?
Re-compress both images to your lowest typical level.
The one that's changed the greatest will be the highest quality, have the most detail and dynamic range, without time consuming visual inspection.I just tried it and found this method superficially effective at least.In the future, use better file naming notes!
(My originals are Name00.jpg, first gen are Name01.jpg, radical changes go to Name10.jpg and weirdness can even be accommodated with Name10silo.jpg or Name10-512.jpg for resizes.
)  They also sort in sequential order in file requesters for easy work flow/processing.-Randy</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521</parent>
</comment>
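The recompress-and-measure idea in the comment above can be sketched in a few lines. This is a rough simulation, not a real implementation: `crush` below stands in a uniform pixel quantizer for "re-save at your lowest typical JPEG level", and all function names are illustrative. It is enough to show that the copy that already lost more detail changes less when crushed again:

```python
import numpy as np

def crush(img, step=32):
    # crude stand-in for re-saving at a low JPEG quality level:
    # quantize pixel values with a coarse uniform step
    return np.round(img / step) * step

def change_when_crushed(img):
    # mean absolute difference between an image and its crushed copy
    return np.abs(crush(img) - img).mean()

rng = np.random.default_rng(0)
src = rng.uniform(0, 255, (64, 64))
good = np.round(src / 4) * 4    # lightly damaged copy of src
bad = np.round(src / 32) * 32   # heavily damaged copy of src

# the copy that changes most under recompression held more detail
assert change_when_crushed(good) > change_when_crushed(bad)
```

With a real codec one would re-save both files at identical settings and diff the decoded pixels instead of quantizing directly.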
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725381</id>
	<title>Cisco?</title>
	<author>Anonymous</author>
	<datestamp>1247755140000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Um, Cisco has catastrophic layoffs today and this couldn't have waited until later?</p></htmltext>
<tokenext>Um , Cisco has catastrophic layoffs today and this could n't have waited until later ?</tokentext>
<sentencetext>Um, Cisco has catastrophic layoffs today and this couldn't have waited until later?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724701</id>
	<title>Possible Method...</title>
	<author>teko\_teko</author>
	<datestamp>1247749020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I just thought of a possible way to compare...</p><p>Assuming both JPEGs aren't at the lowest (or very low) quality:</p><p>1. Take image A and create 10 or 20 more copies using different quality levels (5, 10, 15, and so on).<br>2. Compare each of them with image A, from lowest to highest quality.<br>3. Stop where the diff no longer changes from the previous copy; then we can assume image A is at the previous copy's quality level.</p><p>Do the same with image B.</p></htmltext>
<tokenext>I just thought of a possible way to compare...Assuming both JPEG are n't at the lowest ( or very low ) quality : 1 .
Take image A , create 10 or 20 more copies using different levels of quality ( 5 , 10 , 15 , and so on ) .2 .
Compare each of them with image A , from lowest to highest quality.3 .
Stop where the diff no longer change with the previous image , then we can assume image A is at the previous image 's quality level.Do the same with image B .</tokentext>
<sentencetext>I just thought of a possible way to compare...Assuming both JPEG aren't at the lowest (or very low) quality:1.
Take image A, create 10 or 20 more copies using different levels of quality (5, 10, 15, and so on).2.
Compare each of them with image A, from lowest to highest quality.3.
Stop where the diff no longer change with the previous image, then we can assume image A is at the previous image's quality level.Do the same with image B.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723667</parent>
</comment>
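The quality-ladder steps above could be scripted with a real codec (e.g. re-saving with Pillow at each quality level); as a self-contained sketch, the version below stands in uniform quantization of 8x8 DCT blocks for the JPEG quality levels. All names are illustrative, and the 0.5 threshold is an arbitrary "essentially unchanged" cutoff:

```python
import numpy as np

def dct_mat(n=8):
    # orthonormal DCT-II basis, the transform JPEG applies per 8x8 block
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2 / n)

D = dct_mat()

def recompress(img, step):
    # stand-in for "save at quality level q": quantize each block's
    # DCT coefficients with a uniform step (bigger step = lower quality)
    out = np.empty_like(img, dtype=float)
    for i in range(0, img.shape[0], 8):
        for j in range(0, img.shape[1], 8):
            c = D @ img[i:i+8, j:j+8] @ D.T
            out[i:i+8, j:j+8] = D.T @ (np.round(c / step) * step) @ D
    return out

def estimate_step(img, steps=(1, 2, 4, 8, 16, 32, 64)):
    # walk the ladder from finest to coarsest: the coarsest step that
    # leaves the image essentially unchanged is the level it was
    # last compressed at (the "diff no longer changes" plateau)
    best = steps[0]
    for s in steps:
        if np.abs(recompress(img, s) - img).mean() < 0.5:
            best = s
    return best

rng = np.random.default_rng(0)
src = rng.uniform(0, 255, (64, 64))
low = recompress(src, 32)   # heavily compressed copy
high = recompress(src, 4)   # lightly compressed copy
assert estimate_step(low) > estimate_step(high)
```

Running the same estimator on both images and keeping the one with the finer estimated step is exactly the comparison the parent comment proposes.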
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28728247</id>
	<title>compare with a lower quality image</title>
	<author>Anonymous</author>
	<datestamp>1247838300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>It seems to me it would work best if you had something to compare it to.<br>Since you don't have the original, how about looking at it from this point of view:<br>take each of the two images, reprocess them at the lowest JPEG quality (producing the most artifacts), and see which original image is closer to its reprocessed version.<br>The other one should then be the higher quality.</p></htmltext>
<tokenext>It seems to me it would work best if you had something to compare it tosince you do n't have the original , how about looking at it from this point of view.take each of the two images and reprocess them with the lowest quality of jpeg ( producing the most artifacts ) and see which original image is closer to its reprocessed image.the other one should then be the highest quality .</tokentext>
<sentencetext>It seems to me it would work best if you had something to compare it tosince you don't have the original, how about looking at it from this point of view.take each of the two images and reprocess them with the lowest quality of jpeg (producing the most artifacts) and see which original image is closer to its reprocessed image.the other one should then be the highest quality.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724113</id>
	<title>tineye?</title>
	<author>E IS mC(Square)</author>
	<datestamp>1247745120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Check out Tineye - <a href="http://tineye.com/faq" title="tineye.com">http://tineye.com/faq</a> [tineye.com]
<br> <br>
It does not do exactly what the above post suggests, but it partially does what the submitter asked (finding similar images on the net).</htmltext>
<tokenext>Check out Tineye - http : //tineye.com/faq [ tineye.com ] It does not do exactly what above post suggests , but it partially does what submitter asked ( finding similar images on the net ) .</tokentext>
<sentencetext>Check out Tineye - http://tineye.com/faq [tineye.com]
 
It does not do exactly what above post suggests, but it partially does what submitter asked (finding similar images on the net).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723565</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724533</id>
	<title>Re:DCT</title>
	<author>Anonymous</author>
	<datestamp>1247747820000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>Or just take the 2D FFT of the entire images.  Higher JPEG compression should result in fewer high frequency components in an image.</p></htmltext>
<tokenext>Or just take the 2D FFT of the entire images .
Higher JPEG compression should result in fewer high frequency components in an image .</tokentext>
<sentencetext>Or just take the 2D FFT of the entire images.
Higher JPEG compression should result in fewer high frequency components in an image.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723695</parent>
</comment>
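The "fewer high-frequency components" heuristic above can be sketched as a single spectral-energy ratio. This is a hedged toy: the low-pass blur below stands in for detail lost at low JPEG quality, and the band cutoff is arbitrary:

```python
import numpy as np

def hf_energy_ratio(img):
    # fraction of spectral energy in the outer (high-frequency) band;
    # heavier JPEG compression discards more of this band
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    yy, xx = np.indices(spec.shape)
    r = np.hypot(yy - h / 2, xx - w / 2)
    return spec[r > min(h, w) / 4].sum() / spec.sum()

rng = np.random.default_rng(1)
sharp = rng.normal(0, 20, (64, 64))
# crude 3x3 box blur standing in for detail lost at low JPEG quality
blurred = sum(np.roll(np.roll(sharp, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9

assert hf_energy_ratio(blurred) < hf_energy_ratio(sharp)
```

Given two near-duplicates, the one with the higher ratio plausibly retained more detail, with the caveat (raised elsewhere in the thread) that re-encoding history can fool any single-number metric.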
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28727789</id>
	<title>Re:AI problem?</title>
	<author>gnasher719</author>
	<datestamp>1247834220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Even simpler mathematical analysis would include such techniques as seeing which one takes up more disk space. Last I checked, that was very highly correlated with compression level.</p></div><p>And it would often be completely wrong, because it doesn't take into account that some people re-encode images. For example, an image could be compressed to a 100 KB JPEG, then become a 4 MB BMP, then be compressed to a 500 KB JPEG. I doubt it will look better than the same image compressed directly to 200 KB.</p>
	</htmltext>
<tokenext>Even simpler mathematical analysis would include such techniques as seeing which one takes up more disk space .
Last I checked , that was very highly correlated with compression level.And it would often be completely wrong because it does n't take into account that some people re-encode images again .
Like an image could be compressed to 100 KB in JPEG , then become a 4 MB BMP image , then compressed to 500 KB JPEG .
I doubt it will look better than the same image , compressed directly to 200 KB .</tokentext>
<sentencetext>Even simpler mathematical analysis would include such techniques as seeing which one takes up more disk space.
Last I checked, that was very highly correlated with compression level.And it would often be completely wrong because it doesn't take into account that some people re-encode images again.
Like an image could be compressed to 100 KB in JPEG, then become a 4 MB BMP image, then compressed to 500 KB JPEG.
I doubt it will look better than the same image, compressed directly to 200 KB.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724765</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724261</id>
	<title>Re:AI problem?</title>
	<author>Xenographic</author>
	<datestamp>1247745960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>&gt; How about Amazon's Mechanical Turk service?</p><p>He might not want everyone looking at his porn collection?</p><p>Also, you'd have to scan every pair of images for dupes, which changes the complexity from N to N*log(N).  Moreover, that relies on humans and some people have no idea which image is higher quality.  Not everyone even understands what a compression artifact is.  Such people won't give you useful answers.</p><p>In his situation, I'd probably run the dupe finder program, then examine all the duplicates personally.  There can't be *that* many... right?</p></htmltext>
<tokenext>&gt; How about Amazon 's Mechanical Turk service ? He might not want everyone looking at his porn collection ? Also , you 'd have to scan every pair of images for dupes , which changes the complexity from N to N * log ( N ) .
Moreover , that relies on humans and some people have no idea which image is higher quality .
Not everyone even understands what a compression artifact is .
Such people wo n't give you useful answers.In his situation , I 'd probably run the dupe finder program , then examine all the duplicates personally .
There ca n't be * that * many... right ?</tokentext>
<sentencetext>&gt; How about Amazon's Mechanical Turk service?He might not want everyone looking at his porn collection?Also, you'd have to scan every pair of images for dupes, which changes the complexity from N to N*log(N).
Moreover, that relies on humans and some people have no idea which image is higher quality.
Not everyone even understands what a compression artifact is.
Such people won't give you useful answers.In his situation, I'd probably run the dupe finder program, then examine all the duplicates personally.
There can't be *that* many... right?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723591</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723813</id>
	<title>image quality measures</title>
	<author>trb</author>
	<datestamp>1247743680000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext>Google (or Google Scholar) for Hosaka plots, or image quality measures.  Ref:
<p>
Hosaka, K., A new picture quality evaluation method.<br>Proc. International Picture Coding Symposium, Tokyo, Japan, 1986, 17-18.</p></htmltext>
<tokenext>google ( or scholar-google ) for Hosaka plots , or image quality measures .
Ref : HOSAKA K. , A new picture quality evaluation method.Proc .
International Picture Coding Symposium , Tokyo , Japan , 1986 , 17-18 .</tokentext>
<sentencetext>google (or scholar-google) for Hosaka plots, or image quality measures.
Ref:

HOSAKA K., A new picture quality evaluation method.Proc.
International Picture Coding Symposium, Tokyo, Japan, 1986, 17-18.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724137</id>
	<title>Re:File size</title>
	<author>PitaBred</author>
	<datestamp>1247745180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>But if they're duplicate pictures (matched by some kind of heuristic), then file size most certainly IS appropriate. You're starting from the same point, so choosing the result that lost less during compression, and is therefore larger, would be quite logical.</htmltext>
<tokenext>But if they 're duplicate pictures ( some kind of matching heuristic ) , then file size most certainly IS appropriate .
You 're starting from the same point , choosing the result with less lost during compression , and therefore larger , would be quite logical .</tokentext>
<sentencetext>But if they're duplicate pictures (some kind of matching heuristic), then file size most certainly IS appropriate.
You're starting from the same point, choosing the result with less lost during compression, and therefore larger, would be quite logical.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723707</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723721</id>
	<title>find the edges? but size is useful and easy?</title>
	<author>with a 'c'</author>
	<datestamp>1247743200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Assuming you can find similar images programmatically, you can probably use size to get a good guess. Alternatively, I know there are algorithms to find edges. Edges are where most JPEG artifacts show up. If you then look at the gradient around the edges, the image with the smoother gradients is likely the better one.</htmltext>
<tokenext>Assuming you can find similar images programmatically you can probably use size to get a good guess .
Alternately I know there are algorithms to find edges .
Edges are where most jpeg artifacts show up .
If you could then look at the gradient from the edges smooth ones will likely be the better image .</tokentext>
<sentencetext>Assuming you can find similar images programmatically you can probably use size to get a good guess.
Alternately I know there are algorithms to find edges.
Edges are where most jpeg artifacts show up.
If you could then look at the gradient from the edges smooth ones will likely be the better image.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28729303</id>
	<title>Re:Measure sharpness?</title>
	<author>b4dc0d3r</author>
	<datestamp>1247843280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>In theory that would work, but you have to consider the data source.  If you're talking about images from the net, this won't cut it.  Lots of times the images will be downloaded and re-saved using higher quality settings.  The result is double compression and the lesser quality image has the higher settings.</p></htmltext>
<tokenext>In theory that would work , but you have to consider the data source .
If you 're talking about images from the net , this wo n't cut it .
Lots of times the images will be downloaded and re-saved using higher quality settings .
The result is double compression and the lesser quality image has the higher settings .</tokentext>
<sentencetext>In theory that would work, but you have to consider the data source.
If you're talking about images from the net, this won't cut it.
Lots of times the images will be downloaded and re-saved using higher quality settings.
The result is double compression and the lesser quality image has the higher settings.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724109</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28727377</id>
	<title>Welcome to Pattern Recognition</title>
	<author>Yamavu</author>
	<datestamp>1247828400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I take it that you want to extract and compare features of the actual JPEG image, regardless of quality. There are many ways to do that, and none of them involves filesize comparisons or the like.

You could look in the JPEG standard and try to filter out compression by reading just the base (DC) value of every 8x8 block (the one that suffers least from compression) and comparing these values for similarity.

However, you should aim for more advanced image recognition and comparison algorithms, for example the ones used by TinEye. Most of these algorithms come from the field of AI, but they're generally quite simple.</htmltext>
<tokenext>I take it that you want to extract and compare features of the actual jpeg image , regardless of quality .
There are many ways to do that and none of them includes filesize comparisons or the like .
You could look in the JPEG Standard and try to filter out compression by just reading the base of every 8x8 block ( that 's the one that should n't be compressed ) and compare these values for similarity .
However you should aim for more advanced image recognition and comparison algorithms , for example the ones used on TinEye .
Most of these algorithms come from the field of AI , but they 're quite simple generally .</tokentext>
<sentencetext>I take it that you want to extract and compare features of the actual jpeg image, regardless of quality.
There are many ways to do that and none of them includes filesize comparisons or the like.
You could look in the JPEG Standard and try to filter out compression by just reading the base of every 8x8 block (that's the one that shouldn't be compressed) and compare these values for similarity.
However you should aim for more advanced image recognition and comparison algorithms, for example the ones used on TinEye.
Most of these algorithms come from the field of AI, but they're quite simple generally.</sentencetext>
</comment>
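The per-block base-value idea above can be sketched with plain block means (the DC coefficient of an 8x8 DCT block is just a scaled block mean). This is an illustrative toy, with quantization standing in for heavy JPEG recompression:

```python
import numpy as np

def block_dc(img):
    # per-8x8-block mean, i.e. the (scaled) DC coefficient of each block
    h, w = img.shape
    return img.reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3))

def dc_distance(a, b):
    # mean absolute difference between the two DC fingerprints
    return np.abs(block_dc(a) - block_dc(b)).mean()

rng = np.random.default_rng(2)
src = rng.uniform(0, 255, (64, 64))
damaged = np.round(src / 32) * 32        # heavily quantized copy of src
other = rng.uniform(0, 255, (64, 64))    # an unrelated image

# the DC fingerprint barely moves under recompression, so it separates
# "same picture, worse quality" from "different picture"
assert dc_distance(src, damaged) < dc_distance(src, other)
```

This makes the fingerprint useful for grouping near-duplicates, though (as the comment says) real similarity search engines use considerably richer features.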
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724883</id>
	<title>Expert's answer</title>
	<author>mezis</author>
	<datestamp>1247750580000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext>Exploit JPEG's weakness.
<br> <br>

JPEG encodes pixels by using a cosine transform on 8x8 pixel blocks. The most perceptually visible artifacts (and the artifacts most susceptible to cause trouble for machine vision algorithms) appear on block boundaries.
<br> <br>

Short answer:<br>
a. 2D-FFT your image<br>
b. Use the value of the 8-pixel-period response in the X and Y directions as your quality metric. The higher, the worse the quality.<br>
<br>
This is a crude first approximation, but it works.</htmltext>
<tokenext>Exploit JPEG 's weakness .
JPEG encodes pixels by using a cosine transform on 8x8 pixel blocks .
The most perceptually visible artifacts ( and the artifacts most susceptible to cause trouble to machine vision algorithms ) appear on block boundaries .
Short answer : a .
2D-FFT your image b. Use the value of the 8-pixel period response in X and Y direction as your quality metric .
The higher , the worse the quality .
This is a crude 1st approximation but works .</tokentext>
<sentencetext>Exploit JPEG's weakness.
JPEG encodes pixels by using a cosine transform on 8x8 pixel blocks.
The most perceptually visible artifacts (and the artifacts most susceptible to cause trouble to machine vision algorithms) appear on block boundaries.
Short answer:
a.
2D-FFT your image
b. Use the value of the 8-pixel period response in X and Y direction as your quality metric.
The higher, the worse the quality.
This is a crude 1st approximation but works.</sentencetext>
</comment>
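The short answer above translates almost directly into numpy. A hedged sketch: a sawtooth with an 8-pixel period stands in for JPEG blocking, and `blockiness` reads the FFT bin at exactly the 8-pixel-period frequency in each direction:

```python
import numpy as np

def blockiness(img):
    # magnitude of the 8-pixel-period spectral component in x and y:
    # JPEG block boundaries repeat every 8 pixels, so block artifacts
    # pile up energy at exactly this frequency
    spec = np.abs(np.fft.fft2(img))
    h, w = img.shape
    return spec[h // 8, 0] + spec[0, w // 8]

rng = np.random.default_rng(3)
clean = rng.normal(128, 5, (64, 64))
blocky = clean + ((np.arange(64) % 8) - 3.5)[None, :]  # steps every 8 px

# higher response at the 8-pixel period = worse (more blocked) image
assert blockiness(blocky) > blockiness(clean)
```

For two near-duplicate JPEGs, the one with the smaller `blockiness` score would be preferred under this metric.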
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723837</id>
	<title>Neural network!</title>
	<author>Anonymous</author>
	<datestamp>1247743740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Compress a bunch of original images with variable quality, noise, etc.</p><p>Go through this set of images (where you know which one is "best") and train a network to return two booleans, one for match/no-match, another for whether the first or second image is better.</p><p>Slow to train, but you can use GPGPU for massive speedups.</p></htmltext>
<tokenext>Compress a bunch of original images with variable quality , noise , etc.Go through this set of images ( where you know which one is " best " ) and train it to return two booleans , one for match/no-match , another for first better or second better.Slow to train , but you can use GPGPU for massive speedups .</tokentext>
<sentencetext>Compress a bunch of original images with variable quality, noise, etc.Go through this set of images (where you know which one is "best") and train it to return two booleans, one for match/no-match, another for first better or second better.Slow to train, but you can use GPGPU for massive speedups.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725031</id>
	<title>Re:Try compressing both further</title>
	<author>4D6963</author>
	<datestamp>1247752020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Good ideas, although I suppose you could combine the two. Recompress both images using the same high-quality settings, assume that the JPEG algorithm will have an easier time compressing what's already been damaged (after all, why not? It works by discarding spectral components, so if more are already discarded from the start, it should compress better), and compare the file sizes.
</p><p>I think it should work for most cases, and the nice thing is you can make it work with a mere bash script and ImageMagick's convert command.</p></htmltext>
<tokenext>Good ideas , although I suppose you could combine the two ideas .
Recompress both images using the same high quality settings , and if you 'll assume that the JPEG algorithm will have an easier time compressing what 's already been damaged ( after all why not , does n't it work by discarding spectral components ?
Therefore if more are already discarded from the start it should compress it better ) and compare the file sizes .
I think it should work for most cases , and the nice thing is you can make it work with a mere bash script and ImageMagick 's convert command .</tokentext>
<sentencetext>Good ideas, although I suppose you could combine the two ideas.
Recompress both images using the same high quality settings, and if you'll assume that the JPEG algorithm will have an easier time compressing what's already been damaged (after all why not, doesn't it work by discarding spectral components?
Therefore if more are already discarded from the start it should compress it better) and compare the file sizes.
I think it should work for most cases, and the nice thing is you can make it work with a mere bash script and ImageMagick's convert command.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723575</parent>
</comment>
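The parent proposes a bash + ImageMagick `convert` pipeline; to keep the thread's examples in one language, here is a hedged Python sketch of the same idea, with zlib standing in for "re-save both at identical settings": the copy that already lost more spectral detail has lower entropy and compresses to fewer bytes. All names are illustrative:

```python
import zlib
import numpy as np

def recompressed_size(img):
    # stand-in for "re-save at identical settings and compare sizes":
    # a lossless codec's output size tracks how much detail is left
    return len(zlib.compress(img.astype(np.uint8).tobytes(), 9))

rng = np.random.default_rng(4)
src = rng.uniform(0, 255, (64, 64))
good = np.clip(np.round(src / 4) * 4, 0, 255)    # lightly quantized copy
bad = np.clip(np.round(src / 32) * 32, 0, 255)   # heavily quantized copy

# the copy that already lost more detail compresses smaller
assert recompressed_size(bad) < recompressed_size(good)
```

With real files, the analogous shell pipeline would re-encode both JPEGs at one fixed quality and compare the resulting byte counts, keeping the image whose re-encode is larger.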
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725841</id>
	<title>Re:tineye?</title>
	<author>Binary Boy</author>
	<datestamp>1247761740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The image toolkit that TinEye is based on (Piximilar) is even more powerful than TinEye itself. Awesome stuff, one of the best commercial CBIR engines I've seen.</p><p>If you just want to group near-identical images, which vary only by minor processing - resolution, minor color correction - there are simple, low-end tools that can do this easily.  imgseek is open source and works pretty well; I also use the Windows-based VSDIF, which isn't bad for finding duplicates in various formats, scales, and color spaces (I use it for deduplicating image libraries - the corporate edition has a command-line interface). Both of these tools have limits when it comes to cropping and non-right-angle rotations, whereas Piximilar and some of its competitors can handle pretty radically modified images, or recognize individual components of larger images.</p></htmltext>
<tokenext>The image toolkit TinEye is based on ( Piximilar ) is far more powerful even than TinEye .
Awesome stuff , one of the best commercial CBIR engines I 've seen.If you just want to group near-identical images , which vary only by minor processing - resolution , minor color correction - there are simple , low-end tools that can do this easily .
imgseek is open source and works pretty well ; I also use the Windows-based VSDIF , which is n't bad for finding duplicates in various formats , scales , and color spaces ( I use it for deduplicating image libraries - the corporate edition has a command line interface ) .
Both of these tools have limits when it comes to cropping , non-right-angle rotations , whereas Piximilar and some of its competitors can handle pretty radically modified images , or recognize individual components of larger images .</tokentext>
<sentencetext>The image toolkit TinEye is based on (Piximilar) is far more powerful even than TinEye.
Awesome stuff, one of the best commercial CBIR engines I've seen.If you just want to group near-identical images, which vary only by minor processing - resolution, minor color correction - there are simple, low-end tools that can do this easily.
imgseek is open source and works pretty well; I also use the Windows-based VSDIF, which isn't bad for finding duplicates in various formats, scales, and color spaces (I use it for deduplicating image libraries - the corporate edition has a command line interface).
Both of these tools have limits when it comes to cropping, non-right-angle rotations, whereas Piximilar and some of its competitors can handle pretty radically modified images, or recognize individual components of larger images.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724113</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723651</id>
	<title>Re:File size</title>
	<author>Anonymous</author>
	<datestamp>1247742840000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>File size doesn't tell you everything about quality.</p><p>For instance, if you save an image as a JPEG vs. first saving as a dithered GIF and _then_ saving as JPEG, then the second one will have much worse actual quality, even if it has the same filesize (it may well have worse quality AND have a larger file size).</p></htmltext>
<tokenext>File size does n't tell you everything about quality . For instance , if you save an image as a JPEG vs. first saving as a dithered GIF and _then_ saving as JPEG , then the second one will have much worse actual quality , even if it has the same filesize ( it may well have worse quality AND have a larger file size ) .</tokenext>
<sentencetext>File size doesn't tell you everything about quality. For instance, if you save an image as a JPEG vs. first saving as a dithered GIF and _then_ saving as JPEG, then the second one will have much worse actual quality, even if it has the same filesize (it may well have worse quality AND have a larger file size).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521</parent>
</comment>
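The GIF-then-JPEG point is easy to reproduce. A sketch, assuming Pillow and NumPy are available, that saves a synthetic gradient both ways and scores each copy against the source with PSNR (higher is closer to the original); the quality setting of 85 is an arbitrary choice:

```python
import io
import numpy as np
from PIL import Image

def psnr(ref: np.ndarray, img: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB between two uint8 images."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def as_jpeg(img: Image.Image, quality: int = 85) -> bytes:
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

# Synthetic smooth gradient standing in for a photo.
x = np.linspace(0, 255, 256).astype(np.uint8)
r, g = np.meshgrid(x, x)
source = Image.fromarray(np.dstack([r, g, np.full_like(r, 128)]), "RGB")

direct = as_jpeg(source)        # source -> JPEG
dithered = source.convert("P")  # dithered palette, as a GIF save would do
via_gif = as_jpeg(dithered)     # source -> palette -> JPEG

ref = np.asarray(source)
decode = lambda b: np.asarray(Image.open(io.BytesIO(b)).convert("RGB"))
score_direct = psnr(ref, decode(direct))
score_via_gif = psnr(ref, decode(via_gif))
```

The direct save scores well above the palette-routed one here, independently of which file happens to be larger on disk, which is the comment's point.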
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28730659</id>
	<title>Sorting steps to find originals</title>
	<author>rwa2</author>
	<datestamp>1247848980000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>You probably don't necessarily want to find the "best quality" image, but rather the image that was closest to the original.</p><p>I take it you're either trying to eliminate the low-quality duplicates or thumbnails from a really large collection of pr0n, or trying to write an image search engine that tries to present the "best" rendition of a particular image first.</p><ol><li>As a quick first pass (after you've run through to collect all the similar images into separate groups), you'd obviously want to find the version of the image with the highest resolution.  This might let you easily throw out thumbnails or scaled down versions you might come across.  Of course, some dorks will upscale images and post them somewhere, so you might still want to hang on to some of them for the second stage.</li><li><br>For the second pass, you'd likely want to scan through the metadata first, especially stuff exposed by EXIF.  So you'd want to give higher scores to EXIF data that makes it sound like it came directly off a digital camera or scanner, and bump down the desirability of pictures that appeared to have been edited by any sort of photo editing software.</li><li><br>Then maybe you want to look at something that would rank down watermarks or other modifications.</li><li><br>Another step would be to compare compression quality, but I think that's what most of the other posts are concentrating on.  But this is a difficult step because it can be easily fooled, since idiots can re-save a low quality image with the compression quality cranked all the way up so the file size becomes high even though the actual image quality is worse than the original.  You probably need to run it through one of those "photoshop detectors" that could tell you whether the image has been through smoothing or other filters in a photo editor.  
The originals (especially in raw format and maybe high quality JPEG) will have a certain type of CCD noise signature that your software might be able to detect.  In the same vein, a poorly-compressed JPEG will have lots of JPEG quantization artifacts that your software might be able to detect as well.  Otherwise, you're kinda left with zooming in on pics and eyeballing it.</li><li><br>Finally you might be left with a group of images that are exactly the same but have different file names... you probably want some way to store some of the more useful bits of descriptive text as search/tag metadata, but then choose the most consistent file naming convention or slap on your own based on your own metadata.</li></ol><p>Hopefully this gives you a start to important parts of the process that you might have overlooked...</p></htmltext>
<tokenext>You probably do n't necessarily want to find the " best quality " image , but rather the image that was closest to the original .
I take it you 're either trying to eliminate the low-quality duplicates or thumbnails from a really large collection of pr0n , or trying to write an image search engine that tries to present the " best " rendition of a particular image first .
As a quick first pass ( after you 've run through to collect all the similar images into separate groups ) , you 'd obviously want to find the version of the image with the highest resolution .
This might let you easily throw out thumbnails or scaled down versions you might come across .
Of course , some dorks will upscale images and post them somewhere , so you might still want to hang on to some of them for the second stage .
For the second pass , you 'd likely want to scan through the metadata first , especially stuff exposed by EXIF .
So you 'd want to give higher scores to EXIF data that makes it sound like it came directly off a digital camera or scanner , and bump down the desirability of pictures that appeared to have been edited by any sort of photo editing software .
Then maybe you want to look at something that would rank down watermarks or other modifications .
Another step would be to compare compression quality , but I think that 's what most of the other posts are concentrating on .
But this is a difficult step because it can be easily fooled , since idiots can re-save a low quality image with the compression quality cranked all the way up so the file size becomes high even though the actual image quality is worse than the original .
You probably need to run it through one of those " photoshop detectors " that could tell you whether the image has been through smoothing or other filters in a photo editor .
The originals ( especially in raw format and maybe high quality JPEG ) will have a certain type of CCD noise signature that your software might be able to detect .
In the same vein , a poorly-compressed JPEG will have lots of JPEG quantization artifacts that your software might be able to detect as well .
Otherwise , you 're kinda left with zooming in on pics and eyeballing it .
Finally you might be left with a group of images that are exactly the same but have different file names ... you probably want some way to store some of the more useful bits of descriptive text as search/tag metadata , but then choose the most consistent file naming convention or slap on your own based on your own metadata .
Hopefully this gives you a start to important parts of the process that you might have overlooked ...</tokenext>
<sentencetext>You probably don't necessarily want to find the "best quality" image, but rather the image that was closest to the original.
I take it you're either trying to eliminate the low-quality duplicates or thumbnails from a really large collection of pr0n, or trying to write an image search engine that tries to present the "best" rendition of a particular image first.
As a quick first pass (after you've run through to collect all the similar images into separate groups), you'd obviously want to find the version of the image with the highest resolution.
This might let you easily throw out thumbnails or scaled down versions you might come across.
Of course, some dorks will upscale images and post them somewhere, so you might still want to hang on to some of them for the second stage.
For the second pass, you'd likely want to scan through the metadata first, especially stuff exposed by EXIF.
So you'd want to give higher scores to EXIF data that makes it sound like it came directly off a digital camera or scanner, and bump down the desirability of pictures that appeared to have been edited by any sort of photo editing software.
Then maybe you want to look at something that would rank down watermarks or other modifications.
Another step would be to compare compression quality, but I think that's what most of the other posts are concentrating on.
But this is a difficult step because it can be easily fooled, since idiots can re-save a low quality image with the compression quality cranked all the way up so the file size becomes high even though the actual image quality is worse than the original.
You probably need to run it through one of those "photoshop detectors" that could tell you whether the image has been through smoothing or other filters in a photo editor.
The originals (especially in raw format and maybe high quality JPEG) will have a certain type of CCD noise signature that your software might be able to detect.
In the same vein, a poorly-compressed JPEG will have lots of JPEG quantization artifacts that your software might be able to detect as well.
Otherwise, you're kinda left with zooming in on pics and eyeballing it.
Finally you might be left with a group of images that are exactly the same but have different file names... you probably want some way to store some of the more useful bits of descriptive text as search/tag metadata, but then choose the most consistent file naming convention or slap on your own based on your own metadata.
Hopefully this gives you a start to important parts of the process that you might have overlooked...</sentencetext>
</comment>
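The metadata pass described in the comment above can be sketched in a few lines with Pillow. The tag IDs are standard EXIF; the editor blocklist and the +2/-2 weights are made-up placeholders for whatever scoring you settle on:

```python
from PIL import Image

MAKE, MODEL, SOFTWARE = 0x010F, 0x0110, 0x0131        # standard EXIF tag IDs
EDITORS = ("photoshop", "gimp", "lightroom", "paint")  # hypothetical blocklist

def exif_score(path: str) -> int:
    """Crude desirability score: reward camera provenance, punish editors."""
    exif = Image.open(path).getexif()
    score = 0
    if exif.get(MAKE) or exif.get(MODEL):
        score += 2  # make/model usually survives a straight-off-camera file
    software = str(exif.get(SOFTWARE, "")).lower()
    if any(name in software for name in EDITORS):
        score -= 2  # a known editor stamped the Software tag
    return score
```

In a real pipeline you would combine this with the resolution check from the first pass and only fall back to artifact analysis when the metadata ties.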
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28762039</id>
	<title>Re:Share your suggestions</title>
	<author>doti</author>
	<datestamp>1248084660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yes, it takes a lot of work to organize all that porn.</p></htmltext>
<tokenext>Yes , it takes a lot of work to organize all that porn .</tokentext>
<sentencetext>Yes, it takes a lot of work to organize all that porn.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723565</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28727909</id>
	<title>Re:AI problem?</title>
	<author>buchner.johannes</author>
	<datestamp>1247835480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>You're right, it needs to be done by humans to be sure.</p></div><p>I bet this is how "Hot or not" et al. came to life</p>
	</htmltext>
<tokenext>You 're right , it needs to be done by humans to be sure .
I bet this is how " Hot or not " et al. came to life</tokenext>
<sentencetext>You're right, it needs to be done by humans to be sure.
I bet this is how "Hot or not" et al. came to life
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724025</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723591</id>
	<title>Re:AI problem?</title>
	<author>Anonymous</author>
	<datestamp>1247742600000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><blockquote><div><p>...it will simply require a human-level brain.</p></div></blockquote><p>How about Amazon's Mechanical Turk service?<br><a href="https://www.mturk.com/" title="mturk.com">https://www.mturk.com/</a> [mturk.com]</p>
	</htmltext>
<tokenext>...it will simply require a human-level brain . How about Amazon 's Mechanical Turk service ? https://www.mturk.com/ [ mturk.com ]</tokenext>
<sentencetext>...it will simply require a human-level brain. How about Amazon's Mechanical Turk service? https://www.mturk.com/ [mturk.com]
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723509</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724677</id>
	<title>Re:use a "difference matte"</title>
	<author>miggyb</author>
	<datestamp>1247748780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Someone already suggested that before, and I'm not understanding it. The result would be a delta, but it wouldn't help with figuring out which one of the original two was of a higher quality. Having a delta just tells you that the two pictures are different.</htmltext>
<tokenext>Someone already suggested that before , and I 'm not understanding it .
The result would be a delta , but it would n't help with figuring out which one of the original two was of a higher quality .
Having a delta just tells you that the two pictures are different .</tokenext>
<sentencetext>Someone already suggested that before, and I'm not understanding it.
The result would be a delta, but it wouldn't help with figuring out which one of the original two was of a higher quality.
Having a delta just tells you that the two pictures are different.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723719</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521</id>
	<title>File size</title>
	<author>Anonymous</author>
	<datestamp>1247742360000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>it is lossy compression, after all . . .</p></htmltext>
<tokenext>it is lossy compression , after all ...</tokenext>
<sentencetext>it is lossy compression, after all...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725545</id>
	<title>Re:File size</title>
	<author>Anonymous</author>
	<datestamp>1247757060000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext>This is the kind of problem you can solve in 2 minutes with 95% accuracy (by using file size), or never finish at all by listening to all the pedants on Slashdot.  When people know a little too much they love to go on about stuff like entropy and information gain, just because they (sort of) can.
<p>
Try file size on the set of images of interest to you and see if it coincides with your intuition.  If it does, you're done.</p></htmltext>
<tokenext>This is the kind of problem you can solve in 2 minutes with 95 % accuracy ( by using file size ) , or never finish at all by listening to all the pedants on Slashdot .
When people know a little too much they love to go on about stuff like entropy and information gain , just because they ( sort of ) can .
Try file size on the set of images of interest to you and see if it coincides with your intuition .
If it does , you 're done .</tokenext>
<sentencetext>This is the kind of problem you can solve in 2 minutes with 95% accuracy (by using file size), or never finish at all by listening to all the pedants on Slashdot.
When people know a little too much they love to go on about stuff like entropy and information gain, just because they (sort of) can.
Try file size on the set of images of interest to you and see if it coincides with your intuition.
If it does, you're done.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723707</parent>
</comment>
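If the file-size heuristic fits your collection, the whole selection step is a one-liner per group of near-duplicates. A sketch, assuming the grouping into near-duplicate sets has already happened:

```python
import os

def pick_by_size(paths: list) -> str:
    """Return the path with the largest file on disk - the '2 minutes,
    95% accuracy' heuristic: among same-resolution JPEG near-duplicates,
    the bigger file usually kept more of the original detail."""
    return max(paths, key=os.path.getsize)
```

Per the caveats elsewhere in the thread, this fails exactly when someone re-saved a low-quality copy at a high quality setting, so eyeball the losers before deleting.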
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723693
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724171
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723509
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724977
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726381
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724141
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726999
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723509
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724061
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724765
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28727789
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723509
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725775
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723509
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723595
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725231
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723651
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725469
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723509
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724025
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28727909
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723651
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28728971
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724501
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723825
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723539
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28727981
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723539
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723715
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723539
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723929
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723813
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724453
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723511
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724925
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_48</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723651
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726863
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723693
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725747
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723707
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725545
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28731903
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723511
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726175
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723509
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723591
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724261
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723719
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28731671
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723719
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724677
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723695
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724533
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723565
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28762039
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723511
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725275
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723667
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724701
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723707
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725545
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28728661
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723695
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725553
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723707
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725545
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28734829
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_46</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723511
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725383
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723813
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724161
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_47</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724045
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725457
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726263
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723651
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724033
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723509
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724061
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28729279
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723707
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724407
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723707
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725919
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723509
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726945
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723693
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724109
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28729303
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723509
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723591
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28734353
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723695
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725663
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723509
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724061
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724765
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726779
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723707
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724137
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723707
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724995
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723789
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723575
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725031
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_2154238_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723565
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724113
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725841
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723719
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724677
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28731671
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723693
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724171
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724109
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28729303
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725747
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726235
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723575
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725031
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723539
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28727981
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723715
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723929
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723729
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724933
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723555
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723521
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723789
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723825
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725231
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726263
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724501
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723651
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726863
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725469
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28728971
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724033
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723707
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725545
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28728661
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28734829
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28731903
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724995
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724137
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724407
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725919
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724141
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726999
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723667
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724701
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723511
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726175
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725383
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724925
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725275
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723537
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723695
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724533
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725553
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725663
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724251
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723563
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724165
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723567
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724045
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725457
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723565
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28762039
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724113
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725841
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723813
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724161
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724453
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723509
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723591
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724261
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28734353
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724061
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724765
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726779
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28727789
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28729279
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724977
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726381
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28726945
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28723595
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28725775
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724025
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28727909
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_2154238.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_2154238.28724273
</commentlist>
</conversation>
